In the last two posts of this series on implementing the Micro Frontends in Action samples in ASP.NET Core, I've focused on Blazor WebAssembly based Web Components as a way to achieve client-side composition. As a result, we have well-encapsulated frontend parts which can communicate with each other and the page. But there is a problem with client-side rendered fragments: they appear after a delay. While the page loads, the user sees an empty placeholder. This is certainly a bad user experience, but it has even more serious consequences: those fragments may not be visible to search engine crawlers. In the case of something like a buy button, that matters a lot. So, how to deal with this problem? A possible answer is universal rendering.

What Is Universal Rendering?

Universal rendering is about combining server-side and client-side rendering in a way that enables having a single codebase for both purposes. The typical approach is to handle the initial HTML rendering on the server with the help of server-side composition and then, when the page is loaded in the browser, seamlessly rerender the fragments on the client side. The initial rendering should only generate the static markup, while the rerender brings the full functionality. When done properly, this allows for a fast First Contentful Paint while maintaining encapsulation.

The biggest challenge is usually the single codebase, which in this case means rendering Blazor WebAssembly based Web Components on the server.

Server-Side Rendering for Blazor WebAssembly Based Web Components

There is no standard approach to rendering Web Components on the server. Usually, that requires some creative solutions. But Blazor WebAssembly based Web Components are different, because on the server they are Razor components, and ASP.NET Core provides support for prerendering Razor components. This support comes in the form of a Component Tag Helper. But before we get to it, we need to modify the Checkout service so it can return the rendered HTML. This is where the choice of hosted deployment with ASP.NET Core proves beneficial: we can modify the hosting application to support Blazor WebAssembly and controllers with views.

...

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllersWithViews();

var app = builder.Build();

...

app.UseBlazorFrameworkFiles();
app.UseStaticFiles();

app.UseRouting();

app.MapControllerRoute(
    name: "checkout-fragments",
    pattern: "fragment/buy/{sku}/{edition}",
    defaults: new { controller = "Fragments", action = "Buy" }
);

app.Run();

...

The controller for the defined route doesn't need any sophisticated logic; it only needs to pass the parameters to the view. For simplicity, I've decided to go with a dictionary as the model.

public class FragmentsController : Controller
{
    public IActionResult Buy(string sku, string edition)
    {
        IDictionary<string, string> model = new Dictionary<string, string>
        {
            { "Sku", sku },
            { "Edition", edition }
        };

        return View("Buy", model);
    }
}

The only remaining thing is the view which will be using the Component Tag Helper. In general, two pieces of information should be provided to this tag helper: the type of the component and the render mode. There are multiple render modes that render different markers to be used for later bootstrapping, but here we want to use the Static mode which renders only static HTML.

In addition to the component type and render mode, the Component Tag Helper also enables providing values for any component parameters with a param-{ParameterName} syntax. This is how we will pass the values from the model.

@using Demo.AspNetCore.MicroFrontendsInAction.Checkout.Frontend.Components
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@model IDictionary<string, string>

<component type="typeof(BuyButton)" render-mode="Static" param-Sku="@(Model["Sku"])" param-Edition="@(Model["Edition"])" />

If we start the Checkout service and use a browser to navigate to the controller route, we will see an exception complaining about the absence of IBroadcastChannelService. At runtime, Razor components are classes, and ASP.NET Core needs to satisfy their dependencies while creating an instance. Sadly, there is no support for optional dependencies. The options are either a workaround based on injecting IServiceProvider or making sure that the needed dependency is registered. I believe the latter to be more elegant.
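For reference, the IServiceProvider-based workaround could look roughly like the sketch below (resolving the dependency optionally, so the field stays null during prerendering, where the service isn't registered; this is an illustration, not what I've ended up doing).

@using Microsoft.Extensions.DependencyInjection

@inject IServiceProvider ServiceProvider

@code {
    private IBroadcastChannelService? _broadcastChannelService;

    protected override void OnInitialized()
    {
        // GetService (unlike @inject) returns null when the dependency isn't registered.
        _broadcastChannelService = ServiceProvider.GetService<IBroadcastChannelService>();
    }
}

The registration approach, which I went with, only requires one additional line in the hosting application.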

...

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddBroadcastChannel();
builder.Services.AddControllersWithViews();

var app = builder.Build();

...

After this change, navigating to the controller route will display HTML, but in the case of the BuyButton, it is not exactly what we want. The BuyButton component contains the markup for a popup which is displayed upon clicking the button. The issue is that the popup is hidden only with CSS. This is fine for the Web Component scenario (where the styles are already loaded when the component is being rendered) but not desired for this one. This is why I've decided to put a condition around the popup markup.

...

<button type="button" @ref="_buttonElement" @onclick="OnButtonClick">
    buy for @(String.IsNullOrWhiteSpace(Sku) || String.IsNullOrWhiteSpace(Edition) ? "???" : _prices[Sku][Edition])
</button>
@if (_confirmationVisible)
{
    <div class="confirmation confirmation-visible">
        ...
    </div>
}

...

Now the HTML returned by the controller contains only the button markup.

Combining Server-Side and Client-Side Rendering

The Checkout service is now able to provide static HTML representing the BuyButton fragment, based on a single codebase. In the case of micro frontends, that's not everything that is needed for universal rendering. The static HTML needs to be composed into the page before it's served. In this series, I've explored a single server-side composition technique (based on YARP Transforms and Server-Side Includes), so I've decided to reuse it. First, I've copied the code for the body transform from the previous sample. Then, I modified the routing in the proxy to transform the responses coming from the Decide service. As previously, I've created a dedicated route for static content so it doesn't go through the transform unnecessarily.

...

var routes = new[]
{
    ...
    new RouteConfig {
        RouteId = Constants.ROOT_ROUTE_ID,
        ClusterId = Constants.DECIDE_CLUSTER_ID,
        Match = new RouteMatch { Path = "/" },
        Metadata = SsiTransformProvider.SsiEnabledMetadata
    },
    (new RouteConfig {
        RouteId = Constants.DECIDE_ROUTE_ID + "-static",
        ClusterId = Constants.DECIDE_CLUSTER_ID,
        Match = new RouteMatch { Path = Constants.DECIDE_ROUTE_PREFIX + "/static/{**catch-all}" }
    }).WithTransformPathRemovePrefix(Constants.DECIDE_ROUTE_PREFIX),
    (new RouteConfig {
        RouteId = Constants.DECIDE_ROUTE_ID,
        ClusterId = Constants.DECIDE_CLUSTER_ID,
        Match = new RouteMatch { Path = Constants.DECIDE_ROUTE_PREFIX + "/{**catch-all}" },
        Metadata = SsiTransformProvider.SsiEnabledMetadata
    }).WithTransformPathRemovePrefix(Constants.DECIDE_ROUTE_PREFIX),
    ...
};

...

builder.Services.AddReverseProxy()
    .LoadFromMemory(routes, clusters);

...

Now I could modify the markup returned by the Decide service by placing the SSI directives inside the tag representing the Custom Element.

<html>
  ...
  <body class="decide_layout">
    ...
    <div class="decide_details">
      <checkout-buy sku="porsche" edition="standard">
        <!--#include virtual="/checkout/fragment/buy/porsche/standard" -->
      </checkout-buy>
    </div>
    ...
  </body>
</html>

This way, the proxy can inject the static HTML into the markup while serving the initial response, and once the JavaScript for Web Components is loaded, the fragments will be rerendered. We have achieved universal rendering.

What About Progressive Enhancements?

You might have noticed that there is a problem hiding in this solution: it's deceiving the users. The page looks like it's fully loaded, but it's not interactive. There is a delay (until the JavaScript is loaded) before clicking the BuyButton has any effect. This is where progressive enhancements come into play.

I will not go into this subject further here, but one possible approach could be wrapping the button inside a form when the Checkout service is rendering static HTML.

@using Demo.AspNetCore.MicroFrontendsInAction.Checkout.Frontend.Components
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
@model IDictionary<string, string>

<form asp-controller="Checkout" asp-action="Buy" method="post">
    <input type="hidden" name="sku" valeu="@(Model["Sku"])">
    <input type="hidden" name="edition" valeu="@(Model["Edition"])">
    <component type="typeof(BuyButton)" render-mode="Static" param-Sku="@(Model["Sku"])" param-Edition="@(Model["Edition"])" />
</form>

Of course, that's not all the needed changes. The button would have to be rendered with the submit type, and the Checkout service would need to handle the POST request, redirect back to the product page, and manage the cart in the background.
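A rough sketch of what that handler could look like is below; the controller name, the redirect route, and the ICartService abstraction are all hypothetical here.

// Hypothetical abstraction for managing the cart in the background.
public interface ICartService
{
    void AddItem(string sku, string edition);
}

public class CheckoutController : Controller
{
    private readonly ICartService _cartService;

    public CheckoutController(ICartService cartService)
    {
        _cartService = cartService;
    }

    [HttpPost]
    public IActionResult Buy(string sku, string edition)
    {
        _cartService.AddItem(sku, edition);

        // Redirect back to the product page (hypothetical route).
        return Redirect($"/product/{sku}");
    }
}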

If you are interested in doing that exercise, the sample code with universal rendering that you can use as a starter is available on GitHub.

One of the projects I'm currently working on is utilizing Azure Databricks for its machine learning component. The machine learning engineers working on the project wanted to use external IDEs for development. Unfortunately, using external IDEs doesn't eliminate all need for developing or testing directly in Azure Databricks. As we wanted our GitHub repository to be the only source of truth, we had to establish a commits promotion approach that would enable that.

Azure Databricks has support for Git integration, so we've decided to start by using it to integrate Azure Databricks with GitHub.

Configuring GitHub Credentials in Azure Databricks

The first step in setting up Git integration with Azure Databricks is credentials configuration. This is something that every engineer needs to do independently to enable syncing the workspace with a specific branch. It requires the following actions:

  1. Log in to GitHub, click the profile picture, and go to Settings and then Developer settings at the bottom.
  2. On the Developer settings page, switch to Personal access tokens and click Generate new token.
  3. Fill in the form:

    • Provide a recognizable Note for the token.
    • Set the Expiration corresponding to the expected time of work on the project.
    • Select the repo scope.

      GitHub - New Personal Access Token Form

  4. Click Generate token and copy the generated string.
  5. Launch the Azure Databricks workspace.
  6. Click the workspace name in the top right corner and then click User Settings.
  7. On the Git Integration tab select GitHub, provide your username, paste the copied token, and click Save.

    Azure Databricks - Git Integration

Once the credentials to GitHub have been configured, the next step is the creation of an Azure Databricks Repo.

Creating Azure Databricks Repo Based on GitHub Repository

An Azure Databricks Repo is a clone of your remote Git repository (in this case, a GitHub repository) which can be managed through the Azure Databricks UI. The creation process also happens through the UI:

  1. Launch the Azure Databricks workspace.
  2. From the left menu choose Repos and then click Add Repo.
  3. Fill in the form:

    • Check the Create repo by cloning a Git repository.
    • Select GitHub as Git provider.
    • Provide the Git repository URL.
    • The Repository name will auto-populate, but you can modify it to your liking.

      Azure Databricks - Add Repo

  4. Click Submit.

And it's done. You can now select a branch next to the newly created Azure Databricks Repo. If you wish, you can click the down arrow next to the repo/branch name and create a notebook, folder, or file. If the notebook you want to develop in is already in the cloned repository, you can just select it and start developing.

Promoting Commits From Azure Databricks Repo to GitHub Repository

As I've already mentioned, Azure Databricks Repo is managed through the UI. The Git dialog is accessible through the down arrow next to the repo/branch name or directly from the notebook through a button placed next to the name of the notebook (the label of the button is the current Git branch name). From the Git dialog, you can commit and push changes to the GitHub repository.

Azure Databricks - Git Dialog

If you are interested in other manual operations, like pulling changes or resolving merge conflicts, they are well described in the documentation. I'm not going to describe their details here, because those are the operations we wanted to avoid by performing the majority of development in external IDEs and automating commits promotion from GitHub to Azure Databricks Repo.

Promoting Commits From GitHub Repository to Azure Databricks Repo

There are two ways to manage Azure Databricks Repos programmatically: the Repos API and the Repos CLI. As GitHub-hosted runners don't come with the Databricks CLI preinstalled, we've decided to go with the Repos API and PowerShell.

We wanted a GitHub Actions workflow which would run on every push and update all Azure Databricks Repos mapped to the branch to which the push has happened. After going through the API endpoints, we came up with the following flow.

GitHub Actions Workflow for Commits Promotion to Azure Databricks Repo

Before we could start the implementation there was one more missing aspect - authentication.

Azure Databricks can use an Azure AD service principal as an identity for an automated tool or a CI/CD process. Creating a service principal and adding it to an Azure Databricks workspace is a multistep process, which is quite well described in the documentation. After going through it, you should be able to create the following actions secrets for your repository:

  • AZURE_SP_CLIENT_ID - Application (client) ID for the service principal.
  • AZURE_SP_TENANT_ID - Directory (tenant) ID for the service principal.
  • AZURE_SP_CLIENT_SECRET - Client secret for the service principal.
  • AZURE_DATABRICKS_WORKSPACE_INSTANCE_NAME - The Azure Databricks workspace instance name.

With the help of the first three of those secrets and the Microsoft identity platform REST API, we can obtain an Azure AD access token for the service principal. The request we need to make looks like this.

POST https://login.microsoftonline.com/<AZURE_SP_TENANT_ID>/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id=<AZURE_SP_CLIENT_ID>&grant_type=client_credentials&scope=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d%2F.default&client_secret=<AZURE_SP_CLIENT_SECRET>

The magical scope value (the URL-encoded 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d/.default) is a programmatic identifier for Azure Databricks. The response to this request is a JSON object which contains the Azure AD access token in the access_token field. The PowerShell script to make the request and retrieve the token can look like the one below (assuming that the secrets have been put into environment variables).

$azureAdAccessTokenUri = "https://login.microsoftonline.com/$env:AZURE_SP_TENANT_ID/oauth2/v2.0/token"
$azureAdAccessTokenHeaders = @{ "Content-Type" = "application/x-www-form-urlencoded" }
$azureAdAccessTokenBody = "client_id=$env:AZURE_SP_CLIENT_ID&grant_type=client_credentials&scope=2ff814a6-3304-4ab8-85cb-cd0e6f879c1d%2F.default&client_secret=$env:AZURE_SP_CLIENT_SECRET"

$azureAdAccessTokenResponse = Invoke-RestMethod -Method POST -Uri $azureAdAccessTokenUri -Headers $azureAdAccessTokenHeaders -Body $azureAdAccessTokenBody
$azureAdAccessToken = $azureAdAccessTokenResponse.access_token

Having the token, we can start making requests against the Repos API. The first request we want to make in our flow is for getting the repos.

$azureDatabricksReposUri = "https://$env:AZURE_DATABRICKS_WORKSPACE_INSTANCE_NAME/api/2.0/repos"
$azureDatabricksReposHeaders = @{ Authorization = "Bearer $azureAdAccessToken" }

$azureDatabricksReposResponse = Invoke-RestMethod -Method GET -Uri $azureDatabricksReposUri -Headers $azureDatabricksReposHeaders

The $azureDatabricksReposHeaders will be used for subsequent requests as well, because we assume that the access token shouldn't expire before all repos are updated (the default expiration time is ~60 minutes). There is one more assumption here - that there are no more than twenty repos. The results from the /repos endpoint are paginated (with twenty being the page size), which the above script ignores. If there are more than twenty repos, the script needs to be adjusted to handle that.

Once we have all the repos, we can iterate through them and update those which have a matching URL (in case repositories other than the current one have also been mapped) and branch (so we don't perform unnecessary updates).

$githubRepositoryUrl = $env:GITHUB_REPOSITORY_URL.replace("git://","https://")

foreach ($azureDatabricksRepo in $azureDatabricksReposResponse.repos)
{
    if (($azureDatabricksRepo.url -eq $githubRepositoryUrl) -and ($azureDatabricksRepo.branch -eq $env:GITHUB_BRANCH_NAME))
    {
        $azureDatabricksRepoId = $azureDatabricksRepo.id
        $azureDatabricksRepoUri = "https://$env:AZURE_DATABRICKS_WORKSPACE_INSTANCE_NAME/api/2.0/repos/$azureDatabricksRepoId"
        $updateAzureDatabricksRepoBody = @{ "branch" = $azureDatabricksRepo.branch }

        Invoke-RestMethod -Method PATCH -Uri $azureDatabricksRepoUri -Headers $azureDatabricksReposHeaders -Body ($updateAzureDatabricksRepoBody | ConvertTo-Json)
    }
}

The GITHUB_REPOSITORY_URL and GITHUB_BRANCH_NAME values are injected into environment variables from the github context of the action.

That's all the logic we need; you can find the complete workflow here. Sadly, at least in our case, it threw the following error on the first run.

{"error_code":"PERMISSION_DENIED","message":"Missing Git | provider credentials. Go to User Settings > Git Integration to | add your personal access token."}

The error does make sense. After all, from the perspective of Azure Databricks, the service principal is a user and we have never configured GitHub credentials for that user. This raised two questions.

The first question was, which GitHub user should those credentials represent? This is where the concept of a GitHub machine user comes into play. A GitHub machine user is a GitHub personal account, separate from the GitHub personal accounts of engineers/developers in your organization. It should be created against a dedicated email provided by your IT department and used only for automation scenarios.

The second question was how to configure the credentials. You can't launch the Azure Databricks workspace as the service principal user and do it through the UI. Luckily, Azure Databricks provides the Git Credentials API which can be used for this task. You can use Postman (or any other tool of your preference) to first make the request for the Azure AD access token described above, and then make the below request to configure the credentials.

POST https://<WORKSPACE_INSTANCE_NAME>/api/2.0/git-credentials
Content-Type: application/json

{
   "personal_access_token": "<GitHub Machine User Personal Access Token>",
   "git_username": "<GitHub Machine User Username>",
   "git_provider": "GitHub"
}

After this operation, the GitHub Actions workflow started working as expected.

What This Is Not

This is not CI/CD for Azure Databricks. This is just a process supporting daily development in the Azure Databricks context. If you are looking for CI/CD approaches to Azure Databricks, you can take a look here.

I'm continuing my series on implementing the Micro Frontends in Action samples in ASP.NET Core, and I'm continuing the subject of Blazor WebAssembly based Web Components. In the previous post, the project has been expanded with a new service that provides its frontend fragment as a Custom Element powered by Blazor WebAssembly. In this post, I will explore how Custom Elements can communicate with other frontend parts.

There are three communication scenarios I would like to explore: passing information from page to Custom Element (parent to child), passing information from Custom Element to page (child to parent), and passing information between Custom Elements (child to child). Let's go through them one by one.

Page to Custom Element

When it comes to passing information from the page to a Custom Element, there is a standard approach that every web developer will expect. If I want to disable a button, I set an attribute. If I want to change the text on a button, I set an attribute. In general, if I want to change the state of an element, I set an attribute. The same expectation applies to Custom Elements. How to achieve that?

As mentioned in the previous post, the ES6 class which represents a Custom Element can implement a set of lifecycle methods. One of these methods is attributeChangedCallback. It will be invoked each time an attribute from a specified list is added, removed, or has its value changed. The list of attributes which will result in invoking the attributeChangedCallback is defined by the value returned from the static observedAttributes getter.

So, in the case of Custom Elements implemented in JavaScript, one has to implement observedAttributes to return an array of attributes that can modify the state of the Custom Element and implement attributeChangedCallback to modify that state. Once again, you will be happy to know that all this work has already been done in the case of Blazor WebAssembly. The Microsoft.AspNetCore.Components.CustomElements package, which wraps Blazor components as Custom Elements, handles that. It provides an implementation of observedAttributes which returns all the properties marked as parameters, and an implementation of attributeChangedCallback which will update parameter values and give the component a chance to rerender. That makes the implementation quite simple.

I've added a new property named Edition to the BuyButton component, which I created in the previous post. The new property impacts the price depending on whether the customer has chosen a standard or platinum edition. I've also marked the new property as a parameter.

<button type="button" @onclick="OnButtonClick">
    buy for @(String.IsNullOrWhiteSpace(Sku) || String.IsNullOrWhiteSpace(Edition) ? "???" : _prices[Sku][Edition])
</button>
...

@code {
    private IDictionary<string, Dictionary<string, int>> _prices = new Dictionary<string, Dictionary<string, int>>
    {
        { "porsche", new Dictionary<string, int> { { "standard", 66 }, { "platinum", 966 } } },
        { "fendt", new Dictionary<string, int> { { "standard", 54 }, { "platinum", 945 } }  },
        { "eicher", new Dictionary<string, int> { { "standard", 58 }, { "platinum", 958 } }  }
    };

    [Parameter]
    public string? Sku { get; set; }

    [Parameter]
    public string? Edition { get; set; }

    ...
}

This should be all from the component perspective. The rest is just a matter of using the attribute representing the property. First, I've added it to the markup served by the Decide service with a default value. I've also added a checkbox that allows choosing the edition.

<html>
    ...
    <body class="decide_layout">
        ...
        <div class="decide_details">
            <label class="decide_editions">
                <p>Material Upgrade?</p>
                <input type="checkbox" name="edition" value="platinum" />
                <span>Platinum<br />Edition</span>
                <img src="https://mi-fr.org/img/porsche_platinum.svg" />
            </label>
            <checkout-buy sku="porsche" edition="standard"></checkout-buy>
        </div>
        ...
    </body>
</html>

Then I implemented an event handler for the change event of that checkbox where, depending on its state, I change the value of the edition attribute on the custom element.

(function() {
    ...
    const editionsInput = document.querySelector(".decide_editions input");
    ...
    const buyButton = document.querySelector("checkout-buy");

    ...

    editionsInput.addEventListener("change", e => {
        const edition = e.target.checked ? "platinum" : "standard";
        buyButton.setAttribute("edition", edition);
        ...
    });
})();

It worked without any issues. Checking and unchecking the checkbox would result in nicely displaying different prices on the button.

Custom Element to Page

The situation with passing information from a Custom Element to the page is similar to passing information from the page to a Custom Element - there is an expected standard mechanism: events. If something important has occurred internally in the Custom Element and the external world should know about it, the Custom Element should raise an event to which whoever is interested can subscribe.

How to raise a JavaScript event from Blazor? This requires calling a JavaScript function which will wrap a call to dispatchEvent. Why can't dispatchEvent be called directly? That's because Blazor requires the function identifier to be relative to the global scope, while dispatchEvent needs to be called on an instance of an element. This raises another challenge: our wrapper function will require a reference to the Custom Element. Blazor supports capturing references to elements to pass them to JavaScript. The @ref attribute can be included in HTML element markup, resulting in a reference being stored in the variable it is pointing to. This means that the reference to the Custom Element itself can't be passed directly, but a reference to its child element can.

I've written a wrapper function that takes the reference to the button element (but it could be any direct child of the Custom Element) as a parameter and then calls dispatchEvent on its parent.

window.checkout = (function () {
    return {
        dispatchItemAddedEvent: function (checkoutBuyChildElement) {
            checkoutBuyChildElement.parentElement.dispatchEvent(new CustomEvent("checkout:item_added"));
        }
    };
})();

I wanted the event to be raised when the button is clicked, so I've modified OnButtonClick to use the injected IJSRuntime to call my JavaScript function. In the below code, you can also see the @ref attribute in action and how I'm passing that element reference to the wrapper function.

@using Microsoft.JSInterop

@inject IJSRuntime JS

<button type="button" @ref="_buttonElement" @onclick="OnButtonClick">
    buy for @(String.IsNullOrWhiteSpace(Sku) || String.IsNullOrWhiteSpace(Edition) ? "???" : _prices[Sku][Edition])
</button>
...

@code {
    private ElementReference _buttonElement;

    ...

    private async Task OnButtonClick(MouseEventArgs e)
    {
        ...

        await JS.InvokeVoidAsync("checkout.dispatchItemAddedEvent", _buttonElement);
    }

    ...
}

For the whole thing to work, I had to reference the JavaScript from the Decide service markup so that the wrapper function could be called.

<html>
    ...
    <body class="decide_layout">
        ...
        <script src="/checkout/static/components.js"></script>
        <script src="/checkout/_content/Microsoft.AspNetCore.Components.CustomElements/BlazorCustomElements.js"></script>
        ...
    </body>
</html>

Now I could subscribe to the checkout:item_added event and add some bells and whistles whenever it's raised.

(function() {
    ...
    const productElement = document.querySelector(".decide_product");
    const buyButton = document.querySelector("checkout-buy");

    ...

    buyButton.addEventListener("checkout:item_added", e => {
        productElement.classList.add("decide_product--confirm");
    });

    ...
})();

Custom Element to Custom Element

Passing information between Custom Elements is where things get interesting. That is because there is no direct relation between Custom Elements. Let's assume that the Checkout service exposes a second Custom Element which provides a cart representation. The checkout button and mini-cart don't have to be used together. There might be a scenario where only one of them is present, or there might be scenarios where they are rendered by independent parents.

Of course, everything is happening in the browser's context, so there is always an option to search through the entire DOM tree. This is an approach that should be avoided. First, it's tight coupling, as it requires one Custom Element to have detailed knowledge about another. Second, it wouldn't scale. What if there are ten different types of Custom Elements to which information should be passed? That would require ten different searches.

Another option is leaving orchestration to the parent. The parent would listen to events from one Custom Element and change properties on the other. This breaks the separation of responsibilities as the parent (in our case, the Decide service) is now responsible for implementing logic that belongs to someone else (in our case, the Checkout Service).

What is needed is a communication channel that will enable a publish-subscribe pattern. This will ensure proper decoupling. The classic implementation of such a channel is an event-based bus. The publisher raises events with bubbling enabled (by default, it's not), so subscribers can listen for those events on the window object. This is an established approach, but it's not the one I've decided to implement. An event-based bus is a little bit "too public" for me. In the case of multiple Custom Elements communicating, there are a lot of events on the window object, and I would prefer more organization. Luckily, modern browsers provide an alternative way to implement such a channel - the Broadcast Channel API. You can think of the Broadcast Channel API as a simple message bus that provides the capability of creating named channels. The hidden power of the Broadcast Channel API is that it allows communication between windows/tabs, iframes, web workers, and service workers.

Using the Broadcast Channel API in Blazor once again requires JavaScript interop. I've decided to use this opportunity to build a component library that provides easy access to it. I'm not going to describe the process of creating a component library in this post, but if you are interested, just let me know and I'm happy to write a separate post about it. If you want to use the library, it's available on NuGet.
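To give a rough idea of the library's shape, below is a sketch of its surface inferred from how it's used later in this post; the actual definitions in the package may differ.

public interface IBroadcastChannelService
{
    // Creates the named channel or joins it if it already exists.
    Task<IBroadcastChannel> CreateOrJoinAsync(string channelName);
}

public interface IBroadcastChannel : IAsyncDisposable
{
    // Raised when a message arrives on the channel.
    event EventHandler<BroadcastChannelMessageEventArgs> Message;

    // Serializes the message and posts it to the channel.
    Task PostMessageAsync<TMessage>(TMessage message);
}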

After building and publishing the component library, I referenced it in the Checkout project and registered the service it provides.

...

var builder = WebAssemblyHostBuilder.CreateDefault(args);

builder.RootComponents.RegisterAsCustomElement<BuyButton>("checkout-buy");

builder.Services.AddBroadcastChannel();

...

In the checkout button component, I've injected the service. The channel can be created by calling CreateOrJoinAsync, and I'm doing that in OnAfterRenderAsync. I've also made the component implement IAsyncDisposable, where the channel is disposed to avoid JavaScript memory leaks. The last part was calling PostMessageAsync as part of OnButtonClick to send the message to the channel. This completes the publisher.

...

@implements IAsyncDisposable

...
@inject IBroadcastChannelService BroadcastChannelService

...

@code {
    ...
    private IBroadcastChannel? _broadcastChannel;

    ...

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            _broadcastChannel = await BroadcastChannelService.CreateOrJoinAsync("checkout:item-added");
        }
    }

    private async Task OnButtonClick(MouseEventArgs e)
    {
        ...

        if (_broadcastChannel is not null)
        {
            await _broadcastChannel.PostMessageAsync(new CheckoutItem { Sku = Sku, Edition = Edition });
        }

        ...
    }

    ...

    public async ValueTask DisposeAsync()
    {
        if (_broadcastChannel is not null)
        {
            await _broadcastChannel.DisposeAsync();
        }
    }
}

The mini-cart component will be the subscriber. I've added the same code there for injecting the service, joining the channel, and disposing of it. The main difference is that the component will subscribe to the channel's Message event instead of sending anything. The BroadcastChannelMessageEventArgs contains the sent message in its Data property as a JsonDocument, which can be deserialized to the desired type. In the mini-cart component, I'm using the message to add items.

@using System.Text.Json;

@implements IAsyncDisposable

@inject IBroadcastChannelService BroadcastChannelService

@(_items.Count == 0 ? "Your cart is empty." : $"You've picked {_items.Count} tractors:")
@foreach (var item in _items)
{
    <img src="https://mi-fr.org/img/@(item.Sku)[email protected](item.Edition).svg" />
}

@code {
    private IList<CheckoutItem> _items = new List<CheckoutItem>();
    private IBroadcastChannel? _broadcastChannel;
    private JsonSerializerOptions _jsonSerializerOptions = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            _broadcastChannel = await BroadcastChannelService.CreateOrJoinAsync("checkout:item-added");
            _broadcastChannel.Message += OnMessage;
        }
    }

    private void OnMessage(object? sender, BroadcastChannelMessageEventArgs e)
    {
        _items.Add(e.Data.Deserialize<CheckoutItem>(_jsonSerializerOptions));

        StateHasChanged();
    }

    public async ValueTask DisposeAsync()
    {
        if (_broadcastChannel is not null)
        {
            await _broadcastChannel.DisposeAsync();
        }
    }
}

The last thing I did in the Checkout service was exposing the mini-cart component.

...

var builder = WebAssemblyHostBuilder.CreateDefault(args);

builder.RootComponents.RegisterAsCustomElement<BuyButton>("checkout-buy");
builder.RootComponents.RegisterAsCustomElement<MiniCart>("checkout-minicart");

builder.Services.AddBroadcastChannel();

...

Now the mini-cart could be included in the HTML owned by the Decide service.

<html>
  ...
  <body class="decide_layout">
    ...
    <div class="decide_details">
      <checkout-buy sku="porsche"></checkout-buy>
    </div>
    ...
    <div class="decide_summary">
      <checkout-minicart></checkout-minicart>
    </div>
    ...
  </body>
</html>

Playing With the Complete Sample

The complete sample is available on GitHub. You can run it locally by spinning up all the services, but I've also included a GitHub Actions workflow that can deploy the whole solution to Azure (you just need to fork the repository and provide your own credentials).

This is another post in my series on implementing the samples from Micro Frontends in Action in ASP.NET Core.

This time I'm jumping from server-side composition to client-side composition. I've used the word jumping because I haven't fully covered server-side composition yet. There is one more approach to server-side composition which I intend to cover later, but as I've been doing some Blazor WebAssembly work lately, I was more eager to write this one.

Expanding The Project

As you may remember from the first post, the project consists of two services that are hosted in Azure Container Apps.

Decide and Inspire Frontend Layout

Both services are using server-side rendering for their frontends, and the Decide service is loading the Inspire service's frontend fragment via Ajax. It's time to bring a new service into the picture: the Checkout service.

The Checkout service is responsible for the checkout flow. As this flow is more sophisticated than what the Decide and Inspire services provide, the Checkout service requires client-side rendering to provide the experience of a single page application. This requirement creates a need for isolation and encapsulation of the Checkout service frontend fragment. There is a suite of technologies that can help solve that problem - Web Components.

Web Components

Web Components aim at enabling web developers to create reusable elements with well-encapsulated functionality. To achieve that, they bring together four different specifications:

  • Custom Elements, which allow defining your own tags (custom elements) with their own business logic.
  • Shadow DOM, which enables scripting and styling without collisions with other elements.
  • HTML Templates, which allow writing markup templates.
  • ES Modules, which define a consistent way for JavaScript inclusion and reuse.

Custom Elements is the most interesting specification in this context. The way to create a Custom Element is to implement an ES6 class that extends HTMLElement and register it via window.customElements.define. The class can also implement a set of lifecycle methods (constructor, connectedCallback, disconnectedCallback, or attributeChangedCallback). This allows for initializing a SPA framework (Angular, React, Vue, etc.) and instructing it to use this as a root for rendering.

This is exactly what is needed for the Checkout service, where the SPA framework will be Blazor WebAssembly.

Creating a Blazor WebAssembly Based Custom Element

The way Blazor WebAssembly works fits nicely with Custom Elements. If you've ever taken a look at the Program.cs of a Blazor WebAssembly project, you might have noticed some calls to builder.RootComponents.Add. This is because the WebAssembly to which your project gets compiled is designed to perform rendering into elements. Thanks to that, a Blazor WebAssembly application can be wrapped into a Custom Element; it just requires proper initialization. You will be happy to learn that this work has already been done. As part of AspLabs, Steve Sanderson has prepared a package and instructions on how to make Blazor components available as Custom Elements. Let's do it.

I've started with an empty Blazor WebAssembly application hosted in ASP.NET Core, to which I've added a component that will serve as a button initiating the checkout flow (the final version of that component will also contain a confirmation toast; if you're interested, you can find it here).

<button type="button" @onclick="OnButtonClick">buy for @(String.IsNullOrWhiteSpace(Sku) ? "???" : _prices[Sku])</button>
...

@code {
    // Dictionary of tractor prices
    private IDictionary<string, int> _prices = new Dictionary<string, int>
    {
        { "porsche", 66 },
        { "fendt", 54 },
        { "eicher", 58 }
    };

    ...
}

Next, I've added the Microsoft.AspNetCore.Components.CustomElements package to the project.

<Project Sdk="Microsoft.NET.Sdk.BlazorWebAssembly">
  ...
  <ItemGroup>
    ...
    <PackageReference Include="Microsoft.AspNetCore.Components.CustomElements" Version="0.1.0-alpha.*" />
  </ItemGroup>
</Project>

I've removed all .razor files besides the above component and _Imports.razor. After that, I modified the Program.cs by removing the builder.RootComponents.Add calls and adding builder.RootComponents.RegisterAsCustomElement to expose my component as a checkout-buy element.

...

var builder = WebAssemblyHostBuilder.CreateDefault(args);

builder.RootComponents.RegisterAsCustomElement<BuyButton>("checkout-buy");

await builder.Build().RunAsync();

This is it. After the build/publish, the service will serve the custom element through the _content/Microsoft.AspNetCore.Components.CustomElements/BlazorCustomElements.js script, so it's time to plug it into the page served by the Decide service.

Using a Blazor WebAssembly Based Custom Element

Before the Decide service can utilize the custom element, it is necessary to set up the routing. As described in previous posts, all services are hidden behind a YARP-based proxy, which routes the incoming requests based on prefixes (to avoid conflicts). So far, both services were built in a way where the prefixes were an integral part of their implementation (they were included in actions and static content paths). With the Checkout service that would be hard to achieve, due to Blazor WebAssembly static assets.

There is a way to control the base path for Blazor WebAssembly static assets (through StaticWebAssetBasePath project property) but it doesn't affect the BlazorCustomElements.js path. So, instead of complicating the service implementation to handle the prefix, it seems a lot better to make the prefixes a proxy concern and remove them there. YARP has an out-of-the-box capability to do so through PathRemovePrefix transform. There is a ready-to-use extension method (.WithTransformPathRemovePrefix) which allows adding that transform to a specific route.

...

var routes = new[]
{
    ...
    (new RouteConfig
    {
        RouteId = Constants.CHECKOUT_ROUTE_ID,
        ClusterId = Constants.CHECKOUT_CLUSTER_ID,
        Match = new RouteMatch { Path = Constants.CHECKOUT_ROUTE_PREFIX + "/{**catch-all}" }
    }).WithTransformPathRemovePrefix(Constants.CHECKOUT_ROUTE_PREFIX),
    ...
};

var clusters = new[]
{
    ...
    new ClusterConfig()
    {
        ClusterId = Constants.CHECKOUT_CLUSTER_ID,
        Destinations = new Dictionary<string, DestinationConfig>(StringComparer.OrdinalIgnoreCase)
        {
            { Constants.CHECKOUT_SERVICE_URL, new DestinationConfig() { Address = builder.Configuration[Constants.CHECKOUT_SERVICE_URL] } }
        }
    }
};

builder.Services.AddReverseProxy()
    .LoadFromMemory(routes, clusters);

...

Now the Decide service views can be modified to include the custom element. The first step is to add the static assets.

<html>
  <head>
    ...
    <link href="/checkout/static/components.css" rel="stylesheet" />
  </head>
  <body class="decide_layout">
    ...
    <script src="/checkout/_content/Microsoft.AspNetCore.Components.CustomElements/BlazorCustomElements.js"></script>
    <script src="/checkout/_framework/blazor.webassembly.js"></script>
  </body>
</html>

Sadly, this will not be enough for Blazor to work. When Blazor WebAssembly starts, it requests additional boot resources. They must be loaded from the Checkout service as well, which means including the prefix in their URIs. This can be achieved thanks to the JS initializers feature which has been added in .NET 6. The automatic start of Blazor WebAssembly can be disabled, and it can be started manually, which allows providing a function to customize the URIs.

<html>
  ...
  <body class="decide_layout">
    ...
    <script src="/checkout/_framework/blazor.webassembly.js" autostart="false"></script>
    <script>
      Blazor.start({
        loadBootResource: function (type, name, defaultUri, integrity) {
          return `/checkout/_framework/${name}`;
        }
      });
    </script>
  </body>
</html>

Finally, the custom element can be used. It will be available through a tag matching the name provided as a parameter to builder.RootComponents.RegisterAsCustomElement.

<html>
  ...
  <body class="decide_layout">
    ...
    <div class="decide_details">
      <checkout-buy sku="porsche"></checkout-buy>
    </div>
    ...
  </body>
</html>

The Expanded Project

As with previous samples, I've created a GitHub Actions workflow that deploys the solution to Azure. After the deployment, if you navigate to the URL of the ca-app-proxy Container App, you will see a page with the following layout.

Decide, Inspire, and Checkout Frontend Layout

You can click the button rendered by the custom element and see the toast notification.

This approach highlights one of the benefits of micro frontends: the freedom to choose the frontend stack. Thanks to the encapsulation of the Blazor WebAssembly app into a custom element, the Decide service can host it in its static HTML without understanding anything except how to load the scripts. If we wanted to hide even that (something I haven't done here), we could create a script that encapsulates Blazor WebAssembly loading and initialization. That script could be a shared asset that loads Blazor WebAssembly from a CDN (which, besides the performance benefit, would also be the way to go for a solution with multiple services using Blazor WebAssembly).

Rate limiting (sometimes also referred to as throttling) is a key mechanism when it comes to ensuring API responsiveness and availability. By enforcing usage quotas, it can protect an API from issues like:

  • Denial Of Service (DOS) attacks
  • Degraded performance due to traffic spikes
  • Monopolization by a single consumer

Despite its importance, the typical approach to rate limiting is far from perfect when it comes to communicating usage quotas by services and (as a result) respecting those quotas by clients. It shouldn't be a surprise that various services have been experimenting with different approaches to solve this problem. The common pattern in the web world is that some of such experiments start to gain traction, which results in standardization efforts. This is exactly what is currently happening around communicating service usage quotas with the RateLimit Fields for HTTP Internet-Draft. As rate limiting will have built-in support in .NET 7, I thought it might be a good time to take a look at what this potential standard is bringing. But before that, let's recall how HTTP currently supports rate limiting (excluding custom extensions).

Current HTTP Support for Rate Limiting

When it comes to current support for rate limiting in HTTP, it's not much. If a service detects that a client has reached the quota, instead of a regular response it may respond with 429 (Too Many Requests) or 503 (Service Unavailable). Additionally, the service may include a Retry-After header in the response to indicate how long the client should wait before making another request. That's it. It means that the client can only be reactive. There is no way for a client to get information about the quota to avoid hitting it.

In general, this works. That said, handling requests which are above the quota still consumes some resources on the service side. Clients would also prefer to be able to understand the quota and adjust their usage patterns instead of handling it as an exceptional situation. So as I said, it's not much.
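To make the reactive model concrete, a minimal sketch of the service side could look like this; the endpoint and the QuotaReached check are purely illustrative.

// Hypothetical quota check - a real service would track counters per client or IP.
static bool QuotaReached(HttpContext context) => false;

app.MapGet("/limited", (HttpContext context) =>
{
    if (QuotaReached(context))
    {
        context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
        // Tell the client to wait 10 seconds before making another request.
        context.Response.Headers.RetryAfter = "10";
        return Task.CompletedTask;
    }

    return context.Response.WriteAsync("OK");
});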

Proposed Rate Limit Headers

The RateLimit Fields for HTTP Internet-Draft proposes four new headers which aim at enabling a service to communicate usage quotas and policies:

  • RateLimit-Limit to communicate the total quota within a time window.
  • RateLimit-Remaining to communicate the remaining quota within the current time window.
  • RateLimit-Reset to communicate the time (in seconds) remaining in the current time window.
  • RateLimit-Policy to communicate the overall quota policy.

The most interesting one is RateLimit-Policy. It is a list of quota policy items. A quota policy item consists of a quota limit and a single required parameter w, which provides a time window in seconds. Custom parameters are allowed and should be treated as comments. Below you can see an example of RateLimit-Policy which informs the client that it is allowed to make 10 requests per second, 50 requests per minute, 1000 requests per hour, and 5000 requests per 24 hours.

RateLimit-Policy: 10;w=1, 50;w=60, 1000;w=3600, 5000;w=86400
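For illustration, a client parsing such a policy value could use something like the sketch below (simplified: it assumes a well-formed header and ignores custom parameters).

using System.Linq;

static IEnumerable<(int Limit, int WindowSeconds)> ParseRateLimitPolicy(string headerValue)
{
    foreach (string item in headerValue.Split(','))
    {
        string[] parts = item.Split(';');

        // The first part of a quota policy item is the limit itself.
        int limit = int.Parse(parts[0].Trim());

        // The "w" parameter is required; any other parameters are treated as comments.
        string window = parts.Skip(1).Select(part => part.Trim())
            .First(part => part.StartsWith("w=")).Substring(2);

        yield return (limit, int.Parse(window));
    }
}

foreach ((int limit, int window) in ParseRateLimitPolicy("10;w=1, 50;w=60, 1000;w=3600, 5000;w=86400"))
{
    Console.WriteLine($"{limit} requests per {window} seconds");
}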

The only headers which are intended to be required are RateLimit-Limit and RateLimit-Reset (RateLimit-Remaining is strongly recommended). So, how can an ASP.NET Core based service serve those headers?

Communicating Quotas When Using ASP.NET Core Middleware

As I've already mentioned, built-in support for rate limiting comes to .NET with .NET 7. It brings general-purpose primitives for writing rate limiters as well as a few ready-to-use implementations. It also brings a middleware for ASP.NET Core. The below example shows the definition of a fixed time window policy which allows 5 requests per 10 seconds. The OnRejected callback is also provided to return the 429 (Too Many Requests) status code and set the Retry-After header value based on the provided metadata.

using System.Globalization;
using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

var app = builder.Build();

app.UseHttpsRedirection();

app.UseRateLimiter(new RateLimiterOptions
{
    OnRejected = (context, cancellationToken) =>
    {
        if (context.Lease.TryGetMetadata(MetadataName.RetryAfter, out var retryAfter))
        {
            context.HttpContext.Response.Headers.RetryAfter = ((int)retryAfter.TotalSeconds).ToString(NumberFormatInfo.InvariantInfo);
        }

        context.HttpContext.Response.StatusCode = StatusCodes.Status429TooManyRequests;

        return new ValueTask();
    }
}.AddFixedWindowLimiter("fixed-window", new FixedWindowRateLimiterOptions(
    permitLimit: 5,
    queueProcessingOrder: QueueProcessingOrder.OldestFirst,
    queueLimit: 0,
    window: TimeSpan.FromSeconds(10),
    autoReplenishment: true
)));

app.MapGet("/", context => context.Response.WriteAsync("-- Demo.RateLimitHeaders.AspNetCore.RateLimitingMiddleware --"))
    .RequireRateLimiting("fixed-window");

app.Run();

The question is if and how this can be extended to return the rate limit headers. The answer is, sadly, that there seems to be no way to provide the required ones right now. All the information about rate limit policies is well hidden from public access. It would be possible to provide RateLimit-Limit and RateLimit-Policy, as they are a direct result of the provided options. It is also possible to provide RateLimit-Remaining, but it requires rewriting a lot of the middleware ecosystem to get the required value. What seems completely impossible to get right now is RateLimit-Reset, as timers are managed centrally, deep in the System.Threading.RateLimiting core, without any access to their state. There is an option to provide your own timers, but it would mean rewriting the entire middleware stack and taking a lot of responsibility from System.Threading.RateLimiting. Let's hope that things will improve.
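For completeness, a minimal sketch of serving the two option-derived headers could look like the middleware below; it simply reuses the values we passed to the fixed-window limiter ourselves and reads nothing from the rate limiting internals.

const int permitLimit = 5;
const int windowSeconds = 10;

app.Use(async (context, next) =>
{
    context.Response.OnStarting(() =>
    {
        // Both values come straight from our own options, not from the limiter state.
        context.Response.Headers["RateLimit-Limit"] = permitLimit.ToString();
        context.Response.Headers["RateLimit-Policy"] = $"{permitLimit};w={windowSeconds}";

        return Task.CompletedTask;
    });

    await next();
});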

Communicating Quotas When Using AspNetCoreRateLimit Package

That built-in support for rate limiting is something that is just coming to .NET. So far, ASP.NET Core developers have been using their own implementations or non-Microsoft packages for this purpose. Arguably, the most popular rate limiting solution for ASP.NET Core is AspNetCoreRateLimit. The example below provides similar functionality to the one from the built-in rate limiting example.

using AspNetCoreRateLimit;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMemoryCache();

builder.Services.Configure<IpRateLimitOptions>(options =>
{
    options.EnableEndpointRateLimiting = true;
    options.StackBlockedRequests = false;
    options.HttpStatusCode = 429;
    options.GeneralRules = new List<RateLimitRule>
    {
        new RateLimitRule { Endpoint = "*", Period = "10s", Limit = 5 }
    };
});

builder.Services.AddInMemoryRateLimiting();

builder.Services.AddSingleton<IRateLimitConfiguration, RateLimitConfiguration>();

var app = builder.Build();

app.UseHttpsRedirection();

app.UseIpRateLimiting();

app.MapGet("/", context => context.Response.WriteAsync("-- Demo.RateLimitHeaders.AspNetCore.RateLimitPackage --"));

app.Run();

AspNetCoreRateLimit has its own custom way of communicating quotas with HTTP headers. In the case of the above example, you might receive the following values in a response.

X-Rate-Limit-Limit: 10s
X-Rate-Limit-Remaining: 4
X-Rate-Limit-Reset: 2022-07-24T11:30:47.2291052Z

As you can see, they provide potentially useful information, but not in the way that RateLimit Fields for HTTP is going for. Luckily, AspNetCoreRateLimit is not as protective about its internal state and algorithms, so the needed information can be accessed and served in a different way.

The information about the current state is kept in IRateLimitCounterStore. This is a dependency that could be accessed directly, but the method for generating the needed identifiers lives in ProcessingStrategy, so it is better to create an implementation of it dedicated to just reading the counters' state.

internal interface IRateLimitHeadersOnlyProcessingStrategy : IProcessingStrategy
{ }

internal class RateLimitHeadersOnlyProcessingStrategy : ProcessingStrategy, IRateLimitHeadersOnlyProcessingStrategy
{
    private readonly IRateLimitCounterStore _counterStore;
    private readonly IRateLimitConfiguration _config;

    public RateLimitHeadersOnlyProcessingStrategy(IRateLimitCounterStore counterStore, IRateLimitConfiguration config) : base(config)
    {
        _counterStore = counterStore;
        _config = config;
    }

    public override async Task<RateLimitCounter> ProcessRequestAsync(ClientRequestIdentity requestIdentity, RateLimitRule rule,
        ICounterKeyBuilder counterKeyBuilder, RateLimitOptions rateLimitOptions, CancellationToken cancellationToken = default)
    {
        string rateLimitCounterId = BuildCounterKey(requestIdentity, rule, counterKeyBuilder, rateLimitOptions);

        RateLimitCounter? rateLimitCounter = await _counterStore.GetAsync(rateLimitCounterId, cancellationToken);
        if (rateLimitCounter.HasValue)
        {
            return new RateLimitCounter
            {
                Timestamp = rateLimitCounter.Value.Timestamp,
                Count = rateLimitCounter.Value.Count
            };
        }
        else
        {
            return new RateLimitCounter
            {
                Timestamp = DateTime.UtcNow,
                Count = _config.RateIncrementer?.Invoke() ?? 1
            };
        }
    }
}

The second thing that is needed is the set of rules which apply to a specific endpoint and identity. Those can be retrieved from a specific (either IP-based or client-identifier-based) IRateLimitProcessor. The IRateLimitProcessor is also a tunnel to IProcessingStrategy, so it's nice we have a dedicated one. But what about the identity I've just mentioned? The algorithm to retrieve it lives in RateLimitMiddleware, so access to it will be needed. There are two options here. One is to inherit from RateLimitMiddleware and the other is to create an instance of one of its implementations and use it as a dependency. The first option would require hiding the base implementation of Invoke, as it can't be overridden. I didn't like that, so I went with keeping an instance as a dependency. This led to the following code.

internal class IpRateLimitHeadersMiddleware
{
    private readonly RequestDelegate _next;
    private readonly RateLimitOptions _rateLimitOptions;
    private readonly IpRateLimitProcessor _ipRateLimitProcessor;
    private readonly IpRateLimitMiddleware _ipRateLimitMiddleware;

    public IpRateLimitHeadersMiddleware(RequestDelegate next,
        IRateLimitHeadersOnlyProcessingStrategy processingStrategy, IOptions<IpRateLimitOptions> options, IIpPolicyStore policyStore,
        IRateLimitConfiguration config, ILogger<IpRateLimitMiddleware> logger)
    {
        _next = next;
        _rateLimitOptions = options?.Value;
        _ipRateLimitProcessor = new IpRateLimitProcessor(options?.Value, policyStore, processingStrategy);
        _ipRateLimitMiddleware = new IpRateLimitMiddleware(next, processingStrategy, options, policyStore, config, logger);
    }

    public async Task Invoke(HttpContext context)
    {
        ClientRequestIdentity identity = await _ipRateLimitMiddleware.ResolveIdentityAsync(context);

        if (!_ipRateLimitProcessor.IsWhitelisted(identity))
        {
            var rateLimitRulesWithCounters = new Dictionary<RateLimitRule, RateLimitCounter>();

            foreach (var rateLimitRule in await _ipRateLimitProcessor.GetMatchingRulesAsync(identity, context.RequestAborted))
            {
                rateLimitRulesWithCounters.Add(
                    rateLimitRule,
                    await _ipRateLimitProcessor.ProcessRequestAsync(identity, rateLimitRule, context.RequestAborted)
                 );
            }
        }

        await _next.Invoke(context);
    }
}

The rateLimitRulesWithCounters dictionary contains all the rules applying to the endpoint in the context of the current request. This can be used to calculate the rate limit header values.

internal class IpRateLimitHeadersMiddleware
{
    private class RateLimitHeadersState
    {
        public HttpContext Context { get; set; }

        public int Limit { get; set; }

        public int Remaining { get; set; }

        public int Reset { get; set; }

        public string Policy { get; set; } = String.Empty;

        public RateLimitHeadersState(HttpContext context)
        {
            Context = context;
        }
    }

    ...

    public async Task Invoke(HttpContext context)
    {
        ...
    }

    private RateLimitHeadersState PrepareRateLimitHeaders(HttpContext context, Dictionary<RateLimitRule, RateLimitCounter> rateLimitRulesWithCounters)
    {
        RateLimitHeadersState rateLimitHeadersState = new RateLimitHeadersState(context);

        // The headers describe a single quota, so the rule with the longest period is the one advertised.
        var rateLimitHeadersRuleWithCounter = rateLimitRulesWithCounters.OrderByDescending(x => x.Key.PeriodTimespan).FirstOrDefault();
        var rateLimitHeadersRule = rateLimitHeadersRuleWithCounter.Key;
        var rateLimitHeadersCounter = rateLimitHeadersRuleWithCounter.Value;

        rateLimitHeadersState.Limit = (int)rateLimitHeadersRule.Limit;

        rateLimitHeadersState.Remaining = rateLimitHeadersState.Limit - (int)rateLimitHeadersCounter.Count;

        // Seconds remaining until the current window ends and the quota resets.
        rateLimitHeadersState.Reset = (int)(
            (rateLimitHeadersCounter.Timestamp + (rateLimitHeadersRule.PeriodTimespan ?? rateLimitHeadersRule.Period.ToTimeSpan())) - DateTime.UtcNow
            ).TotalSeconds;

        // Every matching rule is advertised through the policy, in the "<limit>;w=<window in seconds>" format.
        rateLimitHeadersState.Policy = String.Join(
            ", ",
            rateLimitRulesWithCounters.Keys.Select(rateLimitRule =>
                $"{(int)rateLimitRule.Limit};w={(int)(rateLimitRule.PeriodTimespan ?? rateLimitRule.Period.ToTimeSpan()).TotalSeconds}")
        );

        return rateLimitHeadersState;
    }
}

The only thing that remains is setting the headers on the response.

internal class IpRateLimitHeadersMiddleware
{
    ...

    public async Task Invoke(HttpContext context)
    {
        ...

        if (!_ipRateLimitProcessor.IsWhitelisted(identity))
        {
            ...

            if (rateLimitRulesWithCounters.Any() && !_rateLimitOptions.DisableRateLimitHeaders)
            {
                // Defer writing the headers until the response is about to start, passing the precomputed values as state.
                context.Response.OnStarting(
                    SetRateLimitHeaders,
                    state: PrepareRateLimitHeaders(context, rateLimitRulesWithCounters)
                );
            }
        }

        ...
    }

    ...

    private Task SetRateLimitHeaders(object state)
    {
        var rateLimitHeadersState = (RateLimitHeadersState)state;

        rateLimitHeadersState.Context.Response.Headers["RateLimit-Limit"] = rateLimitHeadersState.Limit.ToString(CultureInfo.InvariantCulture);
        rateLimitHeadersState.Context.Response.Headers["RateLimit-Remaining"] = rateLimitHeadersState.Remaining.ToString(CultureInfo.InvariantCulture);
        rateLimitHeadersState.Context.Response.Headers["RateLimit-Reset"] = rateLimitHeadersState.Reset.ToString(CultureInfo.InvariantCulture);
        rateLimitHeadersState.Context.Response.Headers["RateLimit-Policy"] = rateLimitHeadersState.Policy;

        return Task.CompletedTask;
    }
}

The last step is registering the RateLimitHeadersOnlyProcessingStrategy and the IpRateLimitHeadersMiddleware (I've registered the middleware after the IpRateLimitMiddleware). A minimal sketch of that registration might look like the one below; it assumes the standard AspNetCoreRateLimit in-memory setup, and the exact way of registering the custom strategy is an assumption which depends on its implementation.
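
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddMemoryCache();
builder.Services.Configure<IpRateLimitOptions>(builder.Configuration.GetSection("IpRateLimiting"));
builder.Services.AddInMemoryRateLimiting();
builder.Services.AddSingleton<IRateLimitConfiguration, RateLimitConfiguration>();
// Hypothetical registration of the custom headers-only strategy described earlier.
builder.Services.AddSingleton<IRateLimitHeadersOnlyProcessingStrategy, RateLimitHeadersOnlyProcessingStrategy>();

var app = builder.Build();

app.UseIpRateLimiting();
app.UseMiddleware<IpRateLimitHeadersMiddleware>();

app.Run();

With that in place, the response will contain values similar to the following ones (the X-Rate-Limit-* headers come from the standard IpRateLimitMiddleware).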

RateLimit-Limit: 5
RateLimit-Remaining: 4
RateLimit-Reset: 9
RateLimit-Policy: 5;w=10
X-Rate-Limit-Limit: 10s
X-Rate-Limit-Remaining: 4
X-Rate-Limit-Reset: 2022-07-25T20:57:32.0746592Z

The code works but certainly isn't perfect, so I've created an issue in the hope that AspNetCoreRateLimit will get those headers built in.

Limiting the Number of Outbound Requests in HttpClient

The general rule around rate limit headers is that they should be treated as informative, so the client doesn't have to do anything specific with them. They are also described as generated at response time, without any guarantee of consistency between requests. This makes perfect sense. In the simple examples above, multiple clients would be competing for the same quota, so the received header values don't tell exactly how many requests a specific client can make within a given window. But real-life scenarios are usually more specific and complex. It's very common for quotas to be per client or per IP address (this is why AspNetCoreRateLimit has concepts like request identity as a first-class citizen; the ASP.NET Core built-in middleware also enables sophisticated scenarios by using PartitionedRateLimiter at its core). In such a scenario, the client might want to use rate limit headers to avoid making requests which have a high likelihood of being throttled. Let's explore that. Below is a simple piece of code that can handle 429 (Too Many Requests) responses and utilize the Retry-After header.

HttpClient client = new();
client.BaseAddress = new("http://localhost:5262");

while (true)
{
    Console.Write("{0:hh:mm:ss}: ", DateTime.UtcNow);

    int nextRequestDelay = 1;

    try
    {
        HttpResponseMessage response = await client.GetAsync("/");
        if (response.IsSuccessStatusCode)
        {
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
        else
        {
            Console.Write($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");

            // GetValues throws when the header is missing, so TryGetValues is used instead. Parsing into
            // a temporary variable keeps the default delay when the header value isn't a number of seconds.
            if (response.Headers.TryGetValues("Retry-After", out var retryAfterValues)
                && Int32.TryParse(retryAfterValues.FirstOrDefault(), out int retryAfterSeconds))
            {
                nextRequestDelay = retryAfterSeconds;
                Console.Write($" | Retry-After: {nextRequestDelay}");
            }

            Console.WriteLine();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

    await Task.Delay(TimeSpan.FromSeconds(nextRequestDelay));
}

Let's assume that the service is sending all the rate limit headers and that they are dedicated to the client. We can rate limit the HttpClient by creating a DelegatingHandler which will read the RateLimit-Policy header value and instantiate a FixedWindowRateLimiter based on it. The FixedWindowRateLimiter will be used to rate limit the outbound requests - if a lease can't be acquired, a locally created HttpResponseMessage will be returned.

internal class RateLimitPolicyHandler : DelegatingHandler
{
    private string? _rateLimitPolicy;
    private RateLimiter? _rateLimiter;

    private static readonly Regex RATE_LIMIT_POLICY_REGEX = new Regex(@"(\d+);w=(\d+)", RegexOptions.Compiled);

    public RateLimitPolicyHandler() : base(new HttpClientHandler())
    { }

    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Note: _rateLimitPolicy and _rateLimiter are not synchronized, which is good enough for this sample.
        if (_rateLimiter is not null)
        {
            // With QueueLimit set to 0 (see below), a failed lease comes back immediately instead of queueing.
            using var rateLimitLease = await _rateLimiter.AcquireAsync(1, cancellationToken);
            if (rateLimitLease.IsAcquired)
            {
                return await base.SendAsync(request, cancellationToken);
            }

            // The request would very likely be throttled anyway, so a 429 response is created locally.
            var rateLimitResponse = new HttpResponseMessage(HttpStatusCode.TooManyRequests);
            rateLimitResponse.Content = new StringContent($"Service rate limit policy ({_rateLimitPolicy}) exceeded!");

            if (rateLimitLease.TryGetMetadata(MetadataName.RetryAfter, out var retryAfter))
            {
                rateLimitResponse.Headers.Add("Retry-After", ((int)retryAfter.TotalSeconds).ToString(NumberFormatInfo.InvariantInfo));
            }

            return rateLimitResponse;
        }

        var response = await base.SendAsync(request, cancellationToken);

        if (response.Headers.Contains("RateLimit-Policy"))
        {
            _rateLimitPolicy = response.Headers.GetValues("RateLimit-Policy").FirstOrDefault();

            if (_rateLimitPolicy is not null)
            {
                Match rateLimitPolicyMatch = RATE_LIMIT_POLICY_REGEX.Match(_rateLimitPolicy);

                if (rateLimitPolicyMatch.Success)
                {
                    int limit = Int32.Parse(rateLimitPolicyMatch.Groups[1].Value);
                    int window = Int32.Parse(rateLimitPolicyMatch.Groups[2].Value);

                    // Build a local limiter matching the advertised policy.
                    _rateLimiter = new FixedWindowRateLimiter(new FixedWindowRateLimiterOptions
                    {
                        PermitLimit = limit,
                        QueueProcessingOrder = QueueProcessingOrder.NewestFirst,
                        QueueLimit = 0,
                        Window = TimeSpan.FromSeconds(window),
                        AutoReplenishment = true
                    });

                    // Consume the permits already used on the server, so the local limiter starts
                    // in sync with the current window (TryGetValues avoids throwing when the header is missing).
                    if (response.Headers.TryGetValues("RateLimit-Remaining", out var rateLimitRemainingValues)
                        && Int32.TryParse(rateLimitRemainingValues.FirstOrDefault(), out int remaining))
                    {
                        using var rateLimitLease = await _rateLimiter.AcquireAsync(limit - remaining, cancellationToken);
                    }
                }
            }
        }

        return response;
    }
}

The above code also uses the RateLimit-Remaining header value to acquire leases for the permits which are no longer available in the initial window. For example, with a 5;w=10 policy and RateLimit-Remaining: 2, three permits are consumed up front so the local limiter matches the server state.

Now, depending on whether the sample code is run with the RateLimitPolicyHandler in the HttpClient pipeline or not, the console output will differ, as the 429 responses will be coming from a different place.
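
Since RateLimitPolicyHandler already wraps an HttpClientHandler, plugging it into the earlier sample only requires passing it to the HttpClient constructor.

HttpClient client = new(new RateLimitPolicyHandler());
client.BaseAddress = new("http://localhost:5262");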

Opinions

The rate limit headers seem like an interesting addition for communicating service usage quotas. Properly used in the right situations, they might be a useful tool; it is just important not to treat them as guarantees.

Serving rate limit headers from ASP.NET Core has its challenges right now. If they become a standard and gain popularity, I think this will change.

If you want to play with the samples, they are available on GitHub.
