There are a number of Web APIs which allow measuring the performance of web applications, such as Navigation Timing, Resource Timing, and User Timing.

The youngest member of the family is the Server Timing API, which allows communicating server performance metrics to the client. The API is not widely supported yet, but Chrome DevTools is able to interpret the information sent from the server and expose it as part of the request timing information. Let's see how this feature can be utilized from ASP.NET Core.

Basics of Server Timing API

The Server Timing metric definition can be represented by the following structure.

public struct ServerTimingMetric
{
    private string _serverTimingMetric;

    public string Name { get; }

    public decimal? Value { get; }

    public string Description { get; }

    public ServerTimingMetric(string name, decimal? value, string description)
    {
        if (String.IsNullOrEmpty(name))
            throw new ArgumentNullException(nameof(name));

        Name = name;
        Value = value;
        Description = description;

        _serverTimingMetric = null;
    }

    public override string ToString()
    {
        if (_serverTimingMetric == null)
        {
            _serverTimingMetric = Name;

            if (Value.HasValue)
                _serverTimingMetric = _serverTimingMetric + "=" + Value.Value.ToString(CultureInfo.InvariantCulture);

            if (!String.IsNullOrEmpty(Description))
                _serverTimingMetric = _serverTimingMetric + ";\"" + Description + "\"";
        }

        return _serverTimingMetric;
    }
}

The only required property is the name, which means that a metric can be used to indicate that something has happened without any related duration information.

The metrics are delivered to the client through the Server-Timing response header. The header may occur multiple times in the response, which means that multiple metrics can be delivered through multiple headers or as a single comma-separated list (or a combination of both). A class representing the header value could look like the one below.

public class ServerTimingHeaderValue
{
    public ICollection<ServerTimingMetric> Metrics { get; }

    public ServerTimingHeaderValue()
    {
        Metrics = new List<ServerTimingMetric>();
    }

    public override string ToString()
    {
        return String.Join(",", Metrics);
    }
}
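
With these two classes in place, a header carrying a few metrics (the last one being a name-only marker) would be serialized like this (the names and values are purely illustrative):

Server-Timing: cache=300;"Cache",sql=900;"Sql Server",missedCache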

Knowing how to construct the header, we can try to feed Chrome DevTools with some information. First, we can write an extension method which will simplify adding the header to the response.

public static class HttpResponseHeadersExtensions
{
    public static void SetServerTiming(this HttpResponse response, params ServerTimingMetric[] metrics)
    {
        ServerTimingHeaderValue serverTiming = new ServerTimingHeaderValue();

        foreach (ServerTimingMetric metric in metrics)
        {
            serverTiming.Metrics.Add(metric);
        }

        response.Headers.Append("Server-Timing", serverTiming.ToString());
    }
}

Now we can create an empty web application and use the extension method to set some metrics.

public class Startup
{
    ...

    public void Configure(IApplicationBuilder app)
    {
        ...

        app.Run(async (context) =>
        {
            context.Response.SetServerTiming(
                new ServerTimingMetric("cache", 300, "Cache"),
                new ServerTimingMetric("sql", 900, "Sql Server"),
                new ServerTimingMetric("fs", 600, "FileSystem"),
                new ServerTimingMetric("cpu", 1230, "Total CPU")
            );

            await context.Response.WriteAsync("-- Demo.AspNetCore.ServerTiming --");
        });
    }
}

After hitting F5 and navigating to the demo application in Chrome, the metrics should be visible in Chrome DevTools.

Chrome Network Tab - Server Timing

Making it more usable

The above demo shows that the Server Timing API works, but from a developer's perspective we would want an easy way to provide metrics from different places in the application. In the case of ASP.NET Core, that usually means a middleware and a service.

The service can be quite simple; it just needs to expose the collection of metrics.

public interface IServerTiming
{
    ICollection<ServerTimingMetric> Metrics { get; }
}

internal class ServerTiming : IServerTiming
{
    public ICollection<ServerTimingMetric> Metrics { get; }

    public ServerTiming()
    {
        Metrics = new List<ServerTimingMetric>();
    }
}

The important part is that metrics need to be collected per request. This can be achieved by properly scoping the service at registration.

public static class ServerTimingServiceCollectionExtensions
{
    public static IServiceCollection AddServerTiming(this IServiceCollection services)
    {
        services.AddScoped<IServerTiming, ServerTiming>();

        return services;
    }
}

The missing part is the middleware which will set the Server-Timing header with the metrics gathered by the service. The tricky part is that the header value should be set as late as possible (so that other components in the pipeline have a chance to provide metrics). Setting the header value before invoking the next step in the pipeline would usually be too early, while trying to do so afterwards might result in an error, as the headers could have already been sent to the client. The solution to this challenge is the HttpResponse.OnStarting method, which allows adding a delegate that will be invoked just before the response headers are sent.

public class ServerTimingMiddleware
{
    private readonly RequestDelegate _next;

    private static Task _completedTask = Task.FromResult<object>(null);

    public ServerTimingMiddleware(RequestDelegate next)
    {
        _next = next ?? throw new ArgumentNullException(nameof(next));
    }

    public Task Invoke(HttpContext context)
    {
        HandleServerTiming(context);

        return _next(context);
    }

    private void HandleServerTiming(HttpContext context)
    {
        context.Response.OnStarting(() => {
            IServerTiming serverTiming = context.RequestServices.GetRequiredService<IServerTiming>();

            if (serverTiming.Metrics.Count > 0)
            {
                context.Response.SetServerTiming(serverTiming.Metrics.ToArray());
            }

            return _completedTask;
        });
    }
}
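
To keep the registration consistent with AddServerTiming, the middleware can be exposed through a UseServerTiming extension method; a minimal sketch of it (the actual implementation in the library may differ) could look like this:

public static class ServerTimingBuilderExtensions
{
    public static IApplicationBuilder UseServerTiming(this IApplicationBuilder app)
    {
        if (app == null)
            throw new ArgumentNullException(nameof(app));

        // Simply plugs ServerTimingMiddleware into the pipeline
        return app.UseMiddleware<ServerTimingMiddleware>();
    }
}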

Below is the same demo as previously, but based on the middleware and the service. The result is exactly the same, but now the service is accessible through DI, which allows for easy gathering of metrics.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddServerTiming();
    }

    public void Configure(IApplicationBuilder app)
    {
        ...

        app.UseServerTiming()
            .Run(async (context) =>
            {
                IServerTiming serverTiming = context.RequestServices
                    .GetRequiredService<IServerTiming>();

                serverTiming.Metrics.Add(new ServerTimingMetric("cache", 300, "Cache"));
                serverTiming.Metrics.Add(new ServerTimingMetric("sql", 900, "Sql Server"));
                serverTiming.Metrics.Add(new ServerTimingMetric("fs", 600, "FileSystem"));
                serverTiming.Metrics.Add(new ServerTimingMetric("cpu", 1230, "Total CPU"));

                await context.Response.WriteAsync("-- Demo.AspNetCore.ServerTiming --");
            });
    }
}

It is important to remember that it is the server that is in full control of which metrics are communicated to the client and when, which may mean that the middleware (or the metrics gathering) should be used conditionally.
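
For example (an illustrative sketch on my part, assuming the metrics should not be exposed in production), the registration could be made conditional on the hosting environment:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Illustrative: expose Server-Timing metrics only outside of Production
    if (!env.IsProduction())
    {
        app.UseServerTiming();
    }

    ...
}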

I've made all the classes mentioned here (and some more) available on GitHub and NuGet, ready to use.

I have a couple of small open source projects out there. For me the hardest part of getting such a project into a state which allows others to use it effectively is creating documentation - I only have enough discipline to put triple-slash comments on the public API. In the past I've been using Sandcastle Help File Builder to create help files based on that, but it has slowly started to feel heavy and outdated. So when Microsoft announced the move of the .NET Framework docs to docs.microsoft.com, with the information that it is powered by DocFX, I decided that this is what I want to try the next time I have to set up documentation for a project. Also, based on my previous experience, I've set some requirements:

  • The documentation needs to be part of the Visual Studio solution.
  • The documentation should be generated on build.
  • The documentation should be previewable from Visual Studio.

When Lib.AspNetCore.Mvc.JqGrid reached v1.0.0, I got the opportunity to try to achieve this.

Dedicated project for documentation

I wanted to keep the documentation as part of the solution, but at the same time I didn't want it to pollute the existing projects. Creating a separate project just for the documentation seemed like a good idea; I just needed to decide on the type of project. DocFX generates the documentation as a website, so a web application project felt natural. It also helped to address the "previewable from Visual Studio" requirement. The built-in preview functionality of DocFX requires going to the command line (yes, I could try to address that with a PostcompileScript target); with a web application project all I need is the F5 key. I've created an empty ASP.NET Core Web Application and enabled static files support.

public class Startup
{
    ...

    public void Configure(IApplicationBuilder app)
    {
        app.UseDefaultFiles()
            .UseStaticFiles();
    }
}

Setting up DocFx

DocFX for Visual Studio is available in the form of the docfx.console package. The moment you install the package, it will attempt to generate documentation whenever the project is built. This means that the build will start failing because the docfx.json file is missing. After consulting the DocFX User Manual I came up with the following file:

{
  "metadata": [
    {
      "src": [
        {
          "files": [
            "Lib.AspNetCore.Mvc.JqGrid.Infrastructure/Lib.AspNetCore.Mvc.JqGrid.Infrastructure.csproj",
            "Lib.AspNetCore.Mvc.JqGrid.Core/Lib.AspNetCore.Mvc.JqGrid.Core.csproj",
            "Lib.AspNetCore.Mvc.JqGrid.DataAnnotations/Lib.AspNetCore.Mvc.JqGrid.DataAnnotations.csproj",
            "Lib.AspNetCore.Mvc.JqGrid.Helper/Lib.AspNetCore.Mvc.JqGrid.Helper.csproj"
          ],
          "exclude": [ "**/bin/**", "**/obj/**" ],
          "src": ".."
        }
      ],
      "dest": "api"
    }
  ],
  "build": {
    "content": [
      {
        "files": [ "api/*.yml" ]
      }
    ],
    "dest": "wwwroot"
  }
}

The metadata section tells DocFX what it should use for generating the API documentation. The src property inside the src section allows setting the base folder for the files property, while the files property should point to the projects which will be used for generation. The dest property defines the output folder for the metadata generation process. This is not the documentation yet. The actual documentation is created in the second step, which is configured through the build section. The content section tells DocFX what to use for the website. Here we should point to the output of the previous step and to any other documents we want to include. The dest property is where the final website will be available - as this is the context of a web application, I've targeted wwwroot.

Building the documentation project resulted in disappointment in the form of "Cache is not valid" and "No metadata is generated" errors. What's worse, the problem was not easy to diagnose, as those errors are reported for a number of different issues. After spending a considerable amount of time looking for my specific issue, I stumbled upon the DocFX v2.16 release notes, which introduced a new TargetFramework property for handling projects which use TargetFrameworks in the csproj. That was exactly my case. The release notes describe how to handle complex scenarios (where the documentation should be different depending on the TargetFramework), but mine was simple, so I just needed to add the property with one of the values from the csproj.

{
  "metadata": [
    {
      ...
      "properties": {
        "TargetFramework": "netstandard1.6"
      }
    }
  ],
  ...
}

This resulted in a successful build and filled wwwroot/api with HTML files.

Adding minimum static content

The documentation is not quite usable yet. It's missing a landing page and a top-level Table of Contents. The Table of Contents can be handled by adding toc.yml or toc.md to the content; DocFX will render it as the top navigation bar. I've decided to go with the Markdown option.

# [Introduction](index.md)

# [API Reference](/api/Lib.AspNetCore.Mvc.JqGrid.Helper.html)

As you can guess, index.md is the landing page; it should also be added to the content.

{
  ...
  "build": {
    "content": [
      {
        "files": [
          "api/*.yml",
          "index.md",
          "toc.md"
        ]
      }
    ],
    ...
  }
}

Adjusting metadata

The last touch is adjusting some metadata values like the title, footer, favicon, logo etc. The favicon and logo require some special handling as they contain paths to resources. In order for a resource to be accessible by DocFX, it has to be added to the dedicated resource section inside build.

{
  ...
  "build": {
    ...
    "resource": [
      {
        "files": [
          "resources/svg/logo.svg",
          "resources/ico/favicon.ico"
        ]
      }
    ],
    ...
    "globalMetadata": {
      "_appTitle": "Lib.AspNetCore.Mvc.JqGrid",
      "_appFooter": "Copyright © 2016 - 2017 Tomasz Pęczek",
      "_appLogoPath": "resources/svg/logo.svg",
      "_appFaviconPath": "resources/ico/favicon.ico",
      "_disableBreadcrumb": true,
      "_disableAffix": true,
      "_disableContribution": true
    }
  }
}

This has satisfied my initial requirements. Further static content can be added exactly the same way as index.md, while the look and feel can be customized with templates.
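
For example, an additional template folder can be referenced through the template property of the build section (the "templates/custom" path here is hypothetical, not something this project uses):

{
  ...
  "build": {
    ...
    "template": [ "default", "templates/custom" ]
  }
}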

Google's Certificate Transparency project is an open framework for monitoring and auditing SSL certificates. The goal behind the project is the detection of mis-issued/malicious certificates and the identification of rogue Certificate Authorities. In October 2016 Google announced that Chrome will require compliance with Certificate Transparency. The initial date for enforcing this requirement was set to October 2017 and later changed to April 2018.

Back in December 2016, the draft of the Expect-CT Extension for HTTP was submitted and quickly followed by a call for adoption. The draft introduces the Expect-CT response header, which will allow hosts to either test or enforce their Certificate Transparency policy. The draft has been adopted and is currently in the IETF stream, while header support is already in development for Chrome (the Security Engineering team at Mozilla has also expressed interest in providing support in Firefox in 2017).

In this post I'm going to show how the Expect-CT response header (and its reporting capabilities) can be set up for an ASP.NET Core application, so that when browser support arrives it can be used for testing compliance with a Certificate Transparency policy.

Setting the Expect-CT response header

The Expect-CT header has three directives defined. The only required one is max-age, which tells the browser how long it should treat the host as a known Expect-CT host. The optional directives are report-uri (which can be used to provide an absolute URI to which violation reports will be sent) and enforce (whose presence results in refusing connections in case of a violation). The specification also requires the header to be delivered only over a secure connection.

Assuming one wants to set up a simple report-only scenario, this can easily be done with an anonymous middleware.

public void Configure(IApplicationBuilder app)
{
    ...

    app.Use((context, next) =>
    {
        if (context.Request.IsHttps)
        {
            context.Response.Headers.Append("Expect-CT",
                $"max-age=0; report-uri=\"https://example.com/report-ct\"");
        }

        return next.Invoke();
    });

    ...
}

But just setting the header has little value; the real goal is to receive the information when something goes wrong.

Receiving violation report

If the report-uri directive has been specified as part of the Expect-CT header and a violation occurs, the client should send a report. The report should be sent using a POST request with a content type of application/expect-ct-report. This means that a middleware aiming at receiving the violation report should check for those conditions.

public class ExpectCtReportingMiddleware
{
    private const string _expectCtReportContentType = "application/expect-ct-report";

    private readonly RequestDelegate _next;

    public ExpectCtReportingMiddleware(RequestDelegate next)
    {
        _next = next ?? throw new ArgumentNullException(nameof(next));
    }

    public async Task Invoke(HttpContext context)
    {
        if (IsExpectCtReportRequest(context.Request))
        {
            // TODO: Get the report from request

            context.Response.StatusCode = StatusCodes.Status204NoContent;
        }
        else
        {
            await _next(context);
        }
    }

    private bool IsExpectCtReportRequest(HttpRequest request)
    {
        return HttpMethods.IsPost(request.Method)
            && (request.ContentType == _expectCtReportContentType);
    }
}

The report should be sent as JSON with an expect-ct-report top-level property containing the violation details. The details include information like the hostname, port, failure time, Expect-CT host expiration time, certificate chains and SCTs (I will skip certificate chains and SCTs here as they are hard to show without real-life examples). A sample report could look like the one below.

{
    "expect-ct-report": {
        "date-time": "2017-05-05T12:45:00Z",
        "hostname": "example.com",
        "port": 443,
        "effective-expiration-date": "2017-05-05T12:45:00Z",
        ...
    }
}

It can be represented by the following class.

public class ExpectCtViolationReport
{
    public DateTime FailureDate { get; set; }

    public string Hostname { get; set; }

    public int Port { get; set; }

    public DateTime EffectiveExpirationDate { get; set; }
}

The middleware needs to parse the request body into an object of this class. The Request.Body is available as a stream, so using a JsonTextReader from Json.NET seems to be a reasonable approach.

public class ExpectCtReportingMiddleware
{
    ...

    public async Task Invoke(HttpContext context)
    {
        if (IsExpectCtReportRequest(context.Request))
        {
            ExpectCtViolationReport report = null;

            using (StreamReader requestBodyReader = new StreamReader(context.Request.Body))
            {
                using (JsonReader requestBodyJsonReader = new JsonTextReader(requestBodyReader))
                {
                    JsonSerializer serializer = new JsonSerializer();
                    serializer.Converters.Add(new ExpectCtViolationReportJsonConverter());
                    serializer.DateFormatHandling = DateFormatHandling.IsoDateFormat;

                    report = serializer.Deserialize<ExpectCtViolationReport>(requestBodyJsonReader);
                }
            }

            context.Response.StatusCode = StatusCodes.Status204NoContent;
        }
        else
        {
            await _next(context);
        }
    }

    ...
}

The ExpectCtViolationReportJsonConverter takes care of going inside the expect-ct-report property and deserializing the object. I'm skipping its code here; it can be found on GitHub.
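
For illustration only, a minimal converter doing that could be sketched as below (this is my own sketch based on the sample report above, not necessarily the code from the repository).

public class ExpectCtViolationReportJsonConverter : JsonConverter
{
    // Sketch only - requires Newtonsoft.Json and Newtonsoft.Json.Linq

    public override bool CanWrite => false;

    public override bool CanConvert(Type objectType)
    {
        return objectType == typeof(ExpectCtViolationReport);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        // Load the whole body and descend into the "expect-ct-report" property
        JObject wrapper = JObject.Load(reader);
        JObject report = wrapper["expect-ct-report"] as JObject;

        if (report == null)
        {
            return null;
        }

        return new ExpectCtViolationReport
        {
            FailureDate = report.Value<DateTime>("date-time"),
            Hostname = report.Value<string>("hostname"),
            Port = report.Value<int>("port"),
            EffectiveExpirationDate = report.Value<DateTime>("effective-expiration-date")
        };
    }

    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        // The converter is only used for reading reports
        throw new NotSupportedException();
    }
}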

The next thing needed is propagating the violation report outside of the middleware. For this purpose a simple service will be sufficient.

public interface IExpectCtReportingService
{
    Task OnExpectCtViolationAsync(ExpectCtViolationReport report);
}

The middleware shouldn't make any assumptions about this service's lifetime, so it is safer to grab it directly from HttpContext.RequestServices when needed instead of relying on constructor injection (which would result in the service being treated as a singleton by the middleware).

public class ExpectCtReportingMiddleware
{
    ...

    public async Task Invoke(HttpContext context)
    {
        if (IsExpectCtReportRequest(context.Request))
        {
            ExpectCtViolationReport report = null;

            ...

            if (report != null)
            {
                IExpectCtReportingService expectCtReportingService =
                    context.RequestServices.GetRequiredService<IExpectCtReportingService>();

                await expectCtReportingService.OnExpectCtViolationAsync(report);
            }

            context.Response.StatusCode = StatusCodes.Status204NoContent;
        }
        else
        {
            await _next(context);
        }
    }

    ...
}

The only thing missing is an implementation of IExpectCtReportingService. Below is a very simple one which uses the ASP.NET Core logging API.

public class LoggerExpectCtReportingService : IExpectCtReportingService
{
    private readonly ILogger _logger;

    public LoggerExpectCtReportingService(ILogger<IExpectCtReportingService> logger)
    {
        _logger = logger;
    }

    public Task OnExpectCtViolationAsync(ExpectCtViolationReport report)
    {
        _logger.LogWarning("Expect-CT Violation: Failure Date: {FailureDate} UTC"
            + " | Effective Expiration Date: {EffectiveExpirationDate} UTC"
            + " | Host: {Host} | Port: {Port}",
            report.FailureDate.ToUniversalTime(),
            report.EffectiveExpirationDate.ToUniversalTime(),
            report.Hostname,
            report.Port);

        return Task.FromResult(0);
    }
}

Now everything can be wired up as part of pipeline configuration.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddTransient<IExpectCtReportingService, LoggerExpectCtReportingService>();
        ...
    }

    public void Configure(IApplicationBuilder app, ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole();
        loggerFactory.AddDebug();

        ...

        app.Use((context, next) =>
        {
            if (context.Request.IsHttps)
            {
                context.Response.Headers.Append("Expect-CT",
                    $"max-age=0; report-uri=\"https://example.com/report-ct\"");
            }

            return next.Invoke();
        })
        .Map("/report-ct", branchedApp => branchedApp.UseMiddleware<ExpectCtReportingMiddleware>());

        ...
    }
}

Ready for the future

This post talks about things which are not quite here yet, but they are coming and we should be prepared. My personal suggestion for when header support arrives would be to set up the Expect-CT header in report-only mode and then gradually move to enforcing.

I've made the functionality described here available as part of my security side project through SecurityHeadersMiddleware, ExpectCtReportingMiddleware and ISecurityHeadersReportingService.

ASP.NET Core comes with ready-to-use Cross-Origin Resource Sharing support in the form of the Microsoft.AspNetCore.Cors package. The usage is very straightforward: you just need to register the services, configure the policy and enable CORS either with middleware (for the whole pipeline or a specific branch), a filter (globally for MVC) or an attribute (at the MVC controller/action level). This is all nicely described in the documentation. But what if there is a need to reconfigure the policy at runtime?

Let's assume that there is an application which contains two APIs. One is considered "private", so only other applications from the same suite can use it, while the second is "public" and the client administrator should be able to configure it so that it can be used with any 3rd party application.

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddCors(options =>
        {
            options.AddPolicy("Private", builder =>
            {
                builder.WithOrigins("http://appone.suite.com", "http://apptwo.suite.com");
                ...
            });

            options.AddPolicy("Public", builder =>
            {
                // Apply "public" policy (based on information read from storage etc.)
                ...
            });
        })
        ...;
    }

    ...
}

The initial configuration of the policy is not an issue; the problem is the moment when the admin decides to change the policy. The whole application shouldn't require a restart for the changes to take effect, so the policy needs to be accessed and reconfigured at runtime.

How the policy can be accessed

The initialization code shows that the policies are being added to the CorsOptions. Internally, IServiceCollection.AddCors plugs the options into the ASP.NET Core options framework by calling IServiceCollection.Configure. This means that they can be retrieved with the help of Dependency Injection as IOptions<CorsOptions> and can be considered a singleton. This is enough information to start building a service which will help with accessing the policy.

public class CorsPolicyAccessor : ICorsPolicyAccessor
{
    private readonly CorsOptions _options;

    public CorsPolicyAccessor(IOptions<CorsOptions> options)
    {
        if (options == null)
        {
            throw new ArgumentNullException(nameof(options));
        }

        _options = options.Value;
    }
}

From this point it's easy. The CorsOptions class exposes a GetPolicy method and a DefaultPolicyName property, which can be used to expose access to the policy.

public class CorsPolicyAccessor : ICorsPolicyAccessor
{
    ...

    public CorsPolicy GetPolicy()
    {
        return _options.GetPolicy(_options.DefaultPolicyName);
    }

    public CorsPolicy GetPolicy(string name)
    {
        return _options.GetPolicy(name);
    }
}
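
For completeness, the ICorsPolicyAccessor interface implemented above simply mirrors those two methods (a minimal definition matching the code in this post):

public interface ICorsPolicyAccessor
{
    CorsPolicy GetPolicy();

    CorsPolicy GetPolicy(string name);
}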

Now the new service can be registered (preferably after the AddCors call).

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddCors(options =>
        {
            ...
        })
        .AddTransient<ICorsPolicyAccessor, CorsPolicyAccessor>()
        ...;
    }

    ...
}

Exposing the policy with MVC

With the help of the just-created ICorsPolicyAccessor service and Dependency Injection, the CORS policy can now be reconfigured at runtime, for example from an ASP.NET Core MVC controller. For starters, let's create an action and a view which list all the origins within the Public policy.

public class OriginsController : Controller
{
    private readonly ICorsPolicyAccessor _corsPolicyAccessor;

    public OriginsController(ICorsPolicyAccessor corsPolicyAccessor)
    {
        _corsPolicyAccessor = corsPolicyAccessor;
    }

    [AcceptVerbs("GET")]
    public IActionResult Manage()
    {
        return View(new OriginsModel(_corsPolicyAccessor.GetPolicy("Public").Origins));
    }
}

public class OriginsModel
{
    private readonly IList<string> _origins;

    public IEnumerable<string> Origins => _origins;

    public OriginsModel(IList<string> origins)
    {
        _origins = origins;
    }
}

@model OriginsModel
<!DOCTYPE html>
<html>
...
<body>
    <div>
        <ul>
            @foreach(var origin in Model.Origins)
            { 
                <li>@origin</li>
            }
        </ul>
    </div>
</body>
</html>

Navigating to the URL pointing at the action should result in a list of all the origins. This can easily be extended with the capability to add and remove origins. First, the view model should be changed so that the list of origins can be used to generate a select element.

public class OriginsModel
{
    ...

    public List<SelectListItem> Origins => _origins.Select(origin => new SelectListItem
    {
        Text = origin,
        Value = origin
    }).ToList();

    ...
}

This allows for adding some inputs and forms to handle the operations (this could be done much more nicely with AJAX, but I want to keep things simple for the sake of clarity).

@model OriginsModel
<!DOCTYPE html>
<html>
...
<body>
    <div>
        <ul>
            @foreach(var origin in Model.Origins)
            { 
                <li>@origin.Text</li>
            }
        </ul>
        <form asp-action="Add" method="post">
            <fieldset>
                <legend>Adding an origin</legend>
                <input type="text" name="origin" />
                <input type="submit" value="Add" />
            </fieldset>
        </form>
        <form asp-action="Remove" method="post">
            <fieldset>
                <legend>Removing an origin</legend>
                <select name="origin" asp-items="Model.Origins"></select>
                <input type="submit" value="Remove" />
            </fieldset>
        </form>
    </div>
</body>
</html>

The last thing to do is handling the Add and Remove actions. I'm going to use the PRG pattern here, which should allow for a clear separation of responsibilities.

public class OriginsController : Controller
{
    ...

    [AcceptVerbs("POST")]
    public IActionResult Add(string origin)
    {
        _corsPolicyAccessor.GetPolicy("Public").Origins.Add(origin);

        return RedirectToAction(nameof(Manage));
    }

    [AcceptVerbs("POST")]
    public IActionResult Remove(string origin)
    {
        _corsPolicyAccessor.GetPolicy("Public").Origins.Remove(origin);

        return RedirectToAction(nameof(Manage));
    }
}

Testing this will show that the changes are indeed picked up immediately (although there is a small risk of a race condition involved). This simple demo shows how easy it is to reconfigure a part of the policy. With this approach, any of the CorsPolicy public properties can be changed.
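
For example (a hypothetical extension of the controller above), other parts of the policy could be adjusted in exactly the same way:

CorsPolicy policy = _corsPolicyAccessor.GetPolicy("Public");

// Illustrative only - any public property of CorsPolicy can be modified like this
policy.SupportsCredentials = true;
policy.Methods.Add("PUT");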

A long time ago (according to the repository, back in 2011) I made the first version of Lib.Web.Mvc public. The initial functionality was a strongly typed helper for jqGrid. In later versions additional functionalities like a Range Requests action result, a CSP action attribute and helpers, an HSTS attribute, and HTTP/2 Server Push with Cache Digest attribute and helpers have been added, but jqGrid support still remained the biggest one. So when ASP.NET Core was getting closer to RTM, this issue popped up. Now (14 months later) I'm releasing Lib.AspNetCore.Mvc.JqGrid version 1.0.0. As this is not just a port (I took the opportunity to redesign a few things), I've decided to describe the key changes.

Packages organization

The functionality has been split into four packages:

  • Lib.AspNetCore.Mvc.JqGrid.Infrastructure - Classes, enumerations and constants representing jqGrid options.
  • Lib.AspNetCore.Mvc.JqGrid.Core - The core serialization and deserialization functionality. If you prefer to write your own JavaScript instead of using strongly typed helper, but you still want some support on the server side for requests and responses this is what you want.
  • Lib.AspNetCore.Mvc.JqGrid.DataAnnotations - Custom data annotations which allow for providing additional metadata when working with strongly typed helper.
  • Lib.AspNetCore.Mvc.JqGrid.Helper - The strongly typed helper (aka the JavaScript generator).

The split was driven mostly by two use cases which have often been raised. One is separating the (view) models from the rest of the application (for example into an independent assembly). The only package needed in such cases is now Lib.AspNetCore.Mvc.JqGrid.DataAnnotations, which doesn't have any ties to ASP.NET Core. The second use case is not using the JavaScript generation part, just the support for request and response serialization. That functionality has been separated as well in order to minimize the footprint for such a scenario.

Usage basics and demos

The helper in the ASP.NET MVC version was an independent class which needed to be initialized (typically in the view) and then could be used to generate the JavaScript and HTML (very similar to System.Web.Helpers.WebGrid). This has been changed: the JavaScript and HTML generation is exposed through IHtmlHelper extension methods (JqGridTableHtml, JqGridPagerHtml, JqGridHtml, and JqGridJavaScript) which take a JqGridOptions instance as a parameter. This means that the view code can be simplified to this (assuming all needed scripts and styles have been referenced):

@Html.JqGridHtml(gridOptions)
<script>
    $(function () {
        @Html.JqGridJavaScript(gridOptions)
    });
</script>

The JqGridOptions instance can be created anywhere in the application; as it sits in Lib.AspNetCore.Mvc.JqGrid.Infrastructure, no reference to ASP.NET Core is even required. When it comes to the controller code, not much has changed. Lib.AspNetCore.Mvc.JqGrid.Core provides classes like JqGridRequest, JqGridResponse or JqGridRecord with appropriate binders and converters which are used automatically.

public IActionResult Characters(JqGridRequest request)
{
    ...

    JqGridResponse response = new JqGridResponse()
    {
        ...
    };

    ...

    return new JqGridJsonResult(response);
}

There is a demo project available on GitHub which contains samples of key feature areas with and without helper usage.

Supported features and roadmap

This first version doesn't support all the features which Lib.Web.Mvc did; if I wanted to achieve that, I don't know when I would release. I've chosen the MVP based on what has been the most common subject of discussions and questions in the past. This gives the following list of areas:

  • Formatters
  • Footer
  • Paging
  • Dynamic scrolling
  • Sorting
  • Single and advanced searching
  • Form and cell editing
  • Grouping
  • Tree grid
  • Subgrids

This is of course not the end. I will soon start setting the roadmap for the next releases. This is something that everybody can have their say on by creating or reacting to issues.

In general I'm open to any form of feedback (tweets, emails, issues, high fives, donations). I will keep working on this project as long as it has value for anybody, and I'll try to answer any questions.
