Idea: How do I...? Or creating recipes.

NOTE: This is a draft idea. My rambling is included.

My mind’s been stuck on an idea recently.

We have some very good tools in our coding community to help us find solutions to problems. Tools like Stack Overflow are simply too valuable to set aside.

What's been missing, I think, are cookbooks. Or rather, lists of recipes that include snippets, solutions, explanations and community-driven details.

Attempt at a mockup

Sample mockup

Recipe Model

Recipes should be centered around a few technologies and could be tagged for more precise scenarios. Let's say I want to integrate Azure Application Insights into AngularJS.

What pops into my head is:

How do I integrate application insights into AngularJS?

The model of said recipes could look like this:

Name: Integrate Application Insights into Angular JS
Tags: application insights, logging, exception
Technology: azure, angularjs, javascript
Description: [...] (markdown? html? bbformat?)
Snippets: [markdown? other?]
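If this ever turned into actual code, a minimal sketch of that model in C# might look like the following (class and property names are my own guesses, not a settled design):

using System.Collections.Generic;

// Hypothetical recipe model; names and shapes are only a sketch.
public class Recipe
{
    public string Name { get; set; }
    public List<string> Tags { get; set; } = new List<string>();
    public List<string> Technologies { get; set; } = new List<string>();
    public string Description { get; set; }                          // Markdown source
    public List<string> Snippets { get; set; } = new List<string>(); // Markdown code blocks
}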

Directions

One thing I definitely want is to make sure that this is hosted on Azure. Most recipes will never change, so it's the perfect case for going static. Having the description and the snippets in Markdown would make it quick to edit and validate recipes.

As for the recipes themselves, I find that every client has their own. Just like your mom's spaghetti, yours is always better than anyone else's, but you do change it a bit over time as you encounter other recipes.

If this software is ever created, it should allow us to create a recipe book per user and allow us to share those recipes with others.

Ramblings

I would love to see some AngularJS/ReactJS cookbooks. Maybe even some Microsoft Azure cookbooks to share snippets on how to work with Storage, Service Bus, etc.

Integrating with Visual Studio would be awesome, especially for snippets that are just pure web.config or *.json modifications.

Note

This is just an idea. If you run with it, meh. Go ahead. Ideas are worth nothing. Execution is the measure of success.

How NOT to copy one BlockBlob to another in Azure

So I came across this piece of code that was trying to replicate a BlockBlob from one place to another.

destinationBlob.Properties.ContentType = sourceBlob.Properties.ContentType;

using (var stream = new MemoryStream())
{
    sourceBlob.DownloadToStream(stream);
    stream.Seek(0, SeekOrigin.Begin);
    destinationBlob.UploadFromStream(stream);
}

The problem

First, it downloads the whole blob locally just to re-upload it.

That might work, but what if your file is 500 MB? Yeah. It's completely loaded into memory and your process memory usage will explode. Worse, if your process is 32-bit, you might run into an OutOfMemoryException sooner rather than later.

The solution

Azure already provides a way to copy data from one blob to another. All of this code was replaced by a single line:

newBlob.StartCopy(oldBlob);
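One caveat worth knowing: StartCopy only starts a server-side copy, which can complete asynchronously for larger blobs. If you need to block until the data has fully landed, a minimal sketch with the same WindowsAzure.Storage client could look like this (the polling interval is an arbitrary choice):

// Poll the destination until the server-side copy completes.
newBlob.FetchAttributes();
while (newBlob.CopyState.Status == CopyStatus.Pending)
{
    System.Threading.Thread.Sleep(500);
    newBlob.FetchAttributes();
}

if (newBlob.CopyState.Status != CopyStatus.Success)
{
    throw new InvalidOperationException($"Copy did not succeed: {newBlob.CopyState.Status}");
}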

If you don’t know how to do something in Azure, looking at the API or asking around might just save you a few hours of troubleshooting.

Taking your software on a static ride

If you haven't noticed recently, I've changed my blogging engine yet again. However, this isn't a post about blogging or engines.

It’s a post about static content.

What was there before

Previously, I was on the MiniBlog blog engine written by Mads Kristensen.

Despite a few issues I've found with it, the main argument for it is "Simple, flexible and powerful." That's written right on the GitHub repository, after all.

What is in place now

I currently run something called Hexo. It's a Node.js static blog generator with support for themes and other modules if you are interested in customizing it.

Why go static?

There is an inherent simplicity in having pure .html files as your content and simple folders as your routing engine. On top of that, most web servers can serve local files very efficiently, and they can output the proper caching HTTP headers without needing to reach for other files or a database.

As for flexibility, well, it's HTML files. If I want images or any other files, I just include them in the blog's folder hierarchy.

You only need to configure your host a bit if you want to benefit from cached files.

Caching

The web.config for this site contains the following:

<configuration>
  <system.webServer>
    <staticContent>
      <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="365:00:00" />
    </staticContent>
  </system.webServer>
</configuration>

You can find similar instructions for nginx or other servers/hosts pretty much everywhere.

Azure & GitHub

The first step is to create an Azure Web App. If you have a custom domain, you will need to bump your plan to at least Shared, but if you are just testing things out, it works with the Free plan.

  • Once created, go into your web application and click on Settings.
  • Navigate to Deployment source.
  • Choose your source.
  • Pick GitHub, set up your authorization and pick a project/branch.

This will synchronize your GitHub branch with your website. Every time there’s a change on your branch, a new deployment will start.

The last piece of the puzzle is to actually generate your site with your static site generator. In my scenario, I needed to run hexo generate. When going through this process, you can inject a custom script into your repository to run custom commands while deploying. Mine can be found right here.

Result?

The final result is that I have a pure HTML/JS/CSS site running on shared hosting.

For all I know, it could run off a Raspberry Pi or an Arduino plugged into the internet and nobody would know the difference. If my website ever becomes too popular for its own good, it will take a lot of visitors to slow it down. It is, after all, only serving static files.

What changed in the architecture?

So what does a blog engine do? It reads post metadata (content, publishing date, authors, etc.) and converts it into the HTML pages the end-user is going to see. The blog engine runs on a web framework (ASP.NET MVC, Django, etc.), and that framework runs on a language runtime. Finally, the framework hands the content to the web server/host, which transfers it to the end-user.

What we’ve done is pre-generate all the possible results that the blog engine could ever produce and store them in static files (HTML). We’ve converted Dynamic to Static.

We had to drop a few features to get to this point, like comments, live editing and a pretty management UI… In my case, it's worth it. I'm technologically savvy enough to create a new post in my system without it slowing me down. Like every architectural decision, it's all a matter of trade-offs.

When should I use static content?

When data doesn't need to change too often. Most CMSes, blogs and even e-commerce product descriptions fit this bill. It's also an option when you don't want to expose your database to external servers and would rather just push content files.

You should also use static content if you need to host files on a multitude of different server OSes.

But what to do with less tech-savvy users?

The same process can still be reused. Maybe not with a GitHub/Azure workflow.

However, you could always use Azure Blob Storage and, using URL Rewrite, rewrite your URLs to point to blob storage directly.

<rewrite>
  <rules>
    <rule name="imagestoazure">
      <match url="images/(.*)" />
      <action type="Redirect" url="https://??????.vo.msecnd.net/images/{R:1}" />
    </rule>
  </rules>
</rewrite>

That way, you can offer a rich UI to your users and regenerate your content when it changes, uploading it asynchronously to blob storage. Your original website on Azure? No need to even take it down. The content changes and the website keeps forwarding requests.

Conclusion

Yes, there are more moving pieces. But my belief is that what we have now is only a tooling problem. We have tools that were built to work with dynamic web sites; we're only starting to build them for static content.

Worse, we are thinking in MVC, cshtml and databases instead of just thinking about what the end client truly wants: a web page with some content that hasn't changed in a long time.

Creating and retrieving snapshots from Azure Blob Storage files

When uploading files to Azure Storage, you might be interested in keeping the version of the file that already exists.

Should you download those files locally? Maybe even back them up? Well, yes, if they need to be kept for a long time, but another solution is to simply take a snapshot of them.

Azure Blob Storage allows you to flag the current state of a blob to allow you to return to it later with all of its content available.

Pre-requisite

Install-Package WindowsAzure.Storage

Creating some random data first

First, let’s create some data with which to work.

var blobClient = CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("test");
container.CreateIfNotExists();

var blob = container.GetBlockBlobReference("textfile.txt");
blob.UploadFromFile("MyTextFile.txt");

If I upload another file to the same BlockBlob, the existing content is overwritten and lost.

What if my app requires that I version those files?

Making a snapshot

Making a snapshot is as easy as adding this line after uploading:

blob.Snapshot();

So now I can run this code and the content of my previous file will still be accessible.

var blob = container.GetBlockBlobReference("textfile.txt");
blob.UploadFromFile("ADifferentTextFile.txt");

Retrieving a snapshot

Since more than one snapshot can be taken, we have to retrieve them as a list.

Here's how to query for all snapshots of a specific file and output their content directly to the screen.

IEnumerable<CloudBlockBlob> snapshots = container.ListBlobs(blobListingDetails: BlobListingDetails.Snapshots, useFlatBlobListing: true)
    .Cast<CloudBlockBlob>()
    .Where(x => x.IsSnapshot && x.Name == "textfile.txt")
    .ToList();

foreach (var item in snapshots)
{
    using (var sr = new StreamReader(item.OpenRead()))
    {
        Console.WriteLine($"Snapshot of {item.Name} taken at date: {item.SnapshotTime.Value.ToLocalTime()} :");
        Console.WriteLine(sr.ReadToEnd());
    }
}

Restoring a snapshot over an existing file

What if you just need to revert and don't care about reading the actual content of the file?

You can easily copy a snapshot to its current blob using something along this line:

var blob = container.GetBlockBlobReference("textfile.txt");
var wantedSnapshot = snapshots.First(); // or any condition that you wish
blob.StartCopy(wantedSnapshot);

Deleting a blob with snapshots

Once a blob has snapshots, you will not be able to delete it by just calling blob.DeleteIfExists(). This is a safety net in case of accidental deletion.

If you are sure that you want to delete a blob along with all of its snapshots, you will need to use the proper overload.

// deletes the blob and its snapshots
blob.DeleteIfExists(DeleteSnapshotsOption.IncludeSnapshots);

// deletes the snapshots only
blob.DeleteIfExists(DeleteSnapshotsOption.DeleteSnapshotsOnly);

Considerations

Cost

First, snapshots aren't free. You pay for the data you are storing. A snapshot that is identical to its source blob costs nothing extra; after all, nothing changed. If the data changes, only the delta counts toward your bill. In other words, changing a single block of a large blob only bills you for that extra block, not for a second full copy.

However, if the data of a BlockBlob is changed using UploadFromFile, UploadText, UploadFromStream or UploadFromByteArray, the whole blob is considered changed, even if the content is identical to the original; you then pay for both the base blob and the snapshot in full. So be careful.

For more detailed information on billing, check out this article about how snapshots accrue charges.

Formatting numbers in AngularJS with Numbrojs

When building an application, we always end up wanting to format our data in a certain way.

The issue happens when we try to integrate it with frameworks and their own opinionated way of coding.

AngularJS includes a very interesting concept called filters that allows us to integrate some of those libraries really easily.

What’s a filter?

A filter in AngularJS lets you pipe data through a function and get a transformed result back.

A good example is taking a raw number and formatting it for display as currency, or taking a date and displaying it in the user's locale.

For more info about filters, check out Angular’s guide to filters.

What is Numbro?

Numbro is a number-formatting library that lets you format and unformat numbers in different ways. Whether it's percentages, time, currency or even bytes, Numbro has you covered.

Unify both

Numbro, when added to your project, will add itself to the global scope. So how do you integrate it with Angular?

Using filters

My favorite way to use it is to create a filter. That lets me apply a consistent format to all my data and gives me a single point of change.

Here are a few examples.

Byte formatting

angular.module('FilterModule', [])
    .filter('bytes', function () {
        return function (bytes) {
            return numbro(bytes).format('0.0 b');
        };
    });
<div>
    <span>{{ file.size | bytes }}</span>
</div>

Currency formatting

angular.module('FilterModule', [])
    .filter('currency', function () {
        return function (money) {
            return numbro(money).formatCurrency();
        };
    });
<div>
    <span>{{ invoice.amount | currency }}</span>
</div>

Percentage formatting

angular.module('FilterModule', [])
    .filter('percentage', function () {
        return function (number) {
            return numbro(number).format('0 %');
        };
    });
<div>
    <span>{{ invoice.paidAmount / invoice.amount | percentage }}</span>
</div>

Formatting data the easy way

Of course, that's just for numbers. You could do the same with dates using Moment.js.

Just another reason why filters can make your life as a developer much easier.

More back pedaling on .NET Core

They say great science is built on the shoulders of giants. Not here. At Aperture, we do all our science from scratch. No hand holding. - Cave Johnson

Removing project.json was explained by the sheer size of refactoring every other project type to unify them under one model. It was a justified explanation.

After reading that the .NET team is removing grunt/gulp from the project templates, I'm left wondering about the motivation behind it. Apparently, some users had issues with it and it had to be pulled. No more details were given.

In fact, they are pulling the old bundler/minifier back into the fray to keep the bundling/minifying feature.

All this so we can do dotnet bundle without requiring other tools. Everything will be unified once more. No need for external tooling or node. Everything will be Microsoft tools.

Now I get to explain to my client why we are using Node to automate our workflow when Microsoft definitely won't be including it in its templates. It really makes you think twice about recommending a direction to clients.

Logging detailed DbEntityValidationException to AppInsights

I've been having issues logging exceptions to AppInsights when a DbEntityValidationException was thrown.

AppInsights would show me the exception with all the associated details, but would not show me which validations failed.

It turns out the exception is not fully serialized. AppInsights takes the Message, the StackTrace and a few other default properties, but that's it.

So how do I get the content of EntityValidationErrors? Manually of course!

Retrieving validation errors

In my scenario, I can do a simple SelectMany since I know I’m dealing with just one entity at a time. Depending on your scenario, you should consider inspecting the Entity property instead of just using the ValidationErrors.

Here’s what I did:

var telemetryClient = new TelemetryClient();
try
{
    // do stuff ...
}
catch (DbEntityValidationException ex)
{
    Dictionary<string, string> properties = ex.EntityValidationErrors.SelectMany(x => x.ValidationErrors)
        .ToDictionary(x => x.PropertyName, x => x.ErrorMessage);
    telemetryClient.TrackException(ex, properties);
}

Here’s how to handle many entities:

foreach (var validationError in dbException.EntityValidationErrors)
{
    var properties = validationError.ValidationErrors.ToDictionary(x => x.PropertyName,
                                                                    x => x.ErrorMessage);
    properties.Add("_EntityType", validationError.Entry.Entity.GetType().FullName);
    telemetryClient.TrackException(dbException, properties);
}

The only caveat is that the exception will be logged as many times as you have invalid entities.
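If that bothers you, nothing stops you from flattening everything into a single TrackException call instead. A quick sketch; the key prefix is just a convention I made up to keep the dictionary keys unique:

var properties = new Dictionary<string, string>();
var index = 0;
foreach (var validationError in dbException.EntityValidationErrors)
{
    var entityType = validationError.Entry.Entity.GetType().Name;
    foreach (var error in validationError.ValidationErrors)
    {
        // Prefix each key with the entity type and an index so entries don't collide.
        properties[$"{entityType}[{index}].{error.PropertyName}"] = error.ErrorMessage;
    }
    index++;
}
telemetryClient.TrackException(dbException, properties);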

On that, back to tracking more exceptions!

It's the perfect time for breaking changes in .NET

Breaking changes are something that happens all the time in the open-source community.

We see people going from Grunt to Gulp because the new way of doing things is better. In the open-source world, projects live or die based on their perceived value.

In the .NET world, things are more stable. Microsoft ensures backward compatibility on their languages and frameworks for years. People get used to seeing the same technology around and, with that, see no reason to change things, since they will be supported for years, sometimes decades.

Microsoft's adoption of OSS practices, however, changed its approach to software. To become faster, things needed to be broken down and rebuilt. Changes needed to happen. To build a framework ready to support the fast-paced change of tomorrow, things we were used to are being ripped apart and rebuilt from scratch.

Not everything was removed. Some good concepts were kept, but it opened the door to changes.

Being open to change

This is the world we live in. I don't know if it's Microsoft's direction, but we need to stay open to change even if it breaks our stuff. Microsoft is a special island where things stay alive for far longer than they sometimes should.

In ASP.NET Core, they went so far that they had to revert some changes to be able to deliver.

Good or bad?

Here's my opinion on the matter. Things that change too quickly can be bad for your ecosystem, because people can't find their footing and spend more time figuring out what broke than delivering value.

Change is necessary. Otherwise, you end up like Java and have this monstrosity for handling dates. C# isn't too different in the collection department either.

What the future will look like

I don't know what the team is planning. What I hope is that dead parts of the framework are retired as newer versions are released.

It’s time to move the cheese and throw the dead weight overboard. Otherwise, you’re just dragging it along for the next 10 years.

Tracking your authenticated users with Azure AppInsights

AppInsights is very easy to set up in a web application. Taking a few minutes to configure a few additional things, however, can really pay off.

Once your initial AppInsights script has been initialized, if you can retrieve your authenticated user's id, you can add it to each request easily.

Simply add this on every page load where a user is authenticated:

var userId = 'user@example.com';
appInsights.setAuthenticatedUserContext(userId);

This will create a cookie that will track your authenticated user on each event/page view/request.

The only wrinkle left to iron out is when two users alternate sessions in the same browser without closing it. See, the cookie has a session lifetime. Most of the time it will be fine, but let's keep our data clean.

Every time a user is considered unauthenticated or logs out, include the following:

appInsights.clearAuthenticatedUserContext();

This will ensure that your authenticated context (the cookie) is cleared and no misattributed events are tacked onto a user.

Integrating AppInsights Instrumentation Key in an AngularJS Application

So I recently had to integrate AppInsights in an AngularJS application.

The issue I faced was that I didn't want to pre-create my AppInsights instance and instead wanted to rely on Azure Resource Manager to instantiate it automatically. In my previous post, I showed you how to move the Instrumentation Key directly into AppSettings. How do you get it to the client?

You could use MVC and render it directly. But the thing is, given the application's architecture… there's no C# running on this side of the project. No .NET at all. So how do I keep my dependencies low while still getting the AppSettings value to the client?

Http Handlers

HTTP handlers were introduced at the very beginning of .NET. They are lightweight, have no dependencies and are very fast.

Exactly what we need here.

Here’s the code that I used:

public class InstrumentationKeyHandler : IHttpHandler
{
    public bool IsReusable => true;

    public void ProcessRequest(HttpContext context)
    {
        var setting = ConfigurationManager.AppSettings["InstrumentationKey"];
        context.Response.Clear();
        context.Response.ContentType = "application/javascript";
        context.Response.Write($"(function(){{window.InstrumentationKey = '{setting}'}})()");
    }
}

Here’s how you configure it in your web.config:

<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <handlers>
      <add name="InstrumentationKey" type="MyNamespace.InstrumentationKeyHandler, MyAssembly" resourceType="Unspecified" path="InstrumentationKey.ashx" verb="GET" />
    </handlers>
  </system.webServer>
</configuration>

And here's how you use it:

<script src="InstrumentationKey.ashx"></script>
<script type="text/javascript">
    if (window.InstrumentationKey) {
        console.debug('InstrumentationKey found.');
        //TODO: Insert AppInsights code here.
    } else {
        console.debug('InstrumentationKey missing.');
    }
</script>

Why does it work?

We are relying on the browser's basic behavior when loading scripts. Scripts may be downloaded in parallel, but they will always be executed in order.

In this scenario, we pre-load our Instrumentation Key inside a global variable and use it in the next script tag.

Server-less Office with O365

I recently had a very interesting conversation with a colleague of mine about setting up basic services for small/medium businesses.

His solution was basically to configure a server on-site and offer Exchange, DNS, Active Directory, etc. in a box.

We had some very interesting discussion, but let me share what I suggested to save him time and his customer money.

Note: I’m not an O365 expert. I’m not even an IT Pro. Just a very passionate technologist.

Replacing Exchange with O365

First, Exchange is a very complicated beast to configure. Little mistakes are very time-consuming to debug.

O365 comes with everything already running. Short of configuring quotas and a few other things, there's nothing to set up.

But the most important point for someone having a client on O365 is that when there’s an issue anywhere, you don’t have to get on-site or VPN/RDP to the machine. You go to the online dashboard, you debug and you deliver value.

You save time and help your client make better use of their resources.

DNS and Active Directory

An O365 subscription basically ships with its own Azure Active Directory, and Windows 10 allows you to join your local machine to an Azure Active Directory domain.

No more need for a Domain Controller client-side.

What is needed?

Buy a wireless router. Plug it into your client's office. Once all the machines have joined the domain, they can all collaborate with each other.

If something goes wrong, replace the router. Everything else is cloud-based.

  • File Sharing? OneDrive is included
  • Email? O365
  • Active Directory? On Azure

Why go that way?

Let's be clear: the job is going to change. Small clients never needed the infrastructure we pushed on them earlier. At best, the server was idle; at worst, that server ended up in a closet, overheating or infested by pests (not joking). There just wasn't anything like the cloud to provide for them.

By helping them lighten their load, we free up our own time to serve more clients and offer them something different.

If you are not doing this now, somebody else is going to offer your client that opportunity. After the paperless office, here’s the server-less office.

Importing your AppInsights Instrumentation Key directly into your AppSettings

So I recently had an application that needed to use AppInsights. The application was deployed using Visual Studio Release Management, so everything needed to be deployed through an Azure Resource Manager template.

Problem

You can't specify an instrumentation key when creating an AppInsights resource; the key is generated automatically upon creation.

Solution

Retrieve the key directly from within the template.

Here’s a trimmed down version of an ARM template that does exactly this.

{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": { },
  "variables": { },
  "resources": [
    {
      "name": "[variables('WebAppName')]",
      "type": "Microsoft.Web/sites",
      "location": "[resourceGroup().location]",
      "apiVersion": "2015-08-01",
      "dependsOn": [],
      "tags": { },
      "properties": { },
      "resources": [
        {
          "name": "appsettings",
          "type": "config",
          "apiVersion": "2015-08-01",
          "dependsOn": [],
          "tags": { },
          "properties": {
            "InstrumentationKey": "[reference(resourceId('Microsoft.Insights/components', variables('appInsightName')), '2014-04-01').InstrumentationKey]"
          }
        }
      ]
    },
    {
      "name": "[variables('appInsightName')]",
      "type": "Microsoft.Insights/components"
    }
  ]
}

Key piece

See that little app setting named InstrumentationKey? That's where the magic happens: the reference() function pulls the auto-generated key from the AppInsights resource at deployment time.

Your instrumentation key is now bound to your Web App's AppSettings without carrying magic strings around.
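On the application side, nothing special is needed to consume it; it's a regular app setting. As a hedged example, with the classic ApplicationInsights SDK and the InstrumentationKey setting name used above, the wiring could look roughly like this:

using System.Configuration;
using Microsoft.ApplicationInsights.Extensibility;

// Read the key injected by the ARM template and hand it to the SDK at startup.
var instrumentationKey = ConfigurationManager.AppSettings["InstrumentationKey"];
TelemetryConfiguration.Active.InstrumentationKey = instrumentationKey;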

Writing cleaner JavaScript code with gulp and eslint

With the new ASP.NET Core 1.0 RC2 right around the corner and its deep integration with the Node.js workflow, I thought I'd put out some examples of what I use for my own workflow.

In this scenario, we’re going to see how we can improve the JavaScript code that we are writing.

Gulp

This example uses gulp.

I'm not saying that gulp is the best tool for the job. I just find that gulp works really well for our team, and you should seriously consider it.

Base file

Let's get things started. We'll start off with the base gulpfile that ships with the RC1 template.

The first thing we are going to do is check what is being done and what is missing.

/// <binding Clean='clean' />
"use strict";

var gulp = require("gulp"),
    rimraf = require("rimraf"),
    concat = require("gulp-concat"),
    cssmin = require("gulp-cssmin"),
    uglify = require("gulp-uglify");

var paths = {
    webroot: "./wwwroot/"
};

paths.js = paths.webroot + "js/**/*.js";
paths.minJs = paths.webroot + "js/**/*.min.js";
paths.css = paths.webroot + "css/**/*.css";
paths.minCss = paths.webroot + "css/**/*.min.css";
paths.concatJsDest = paths.webroot + "js/site.min.js";
paths.concatCssDest = paths.webroot + "css/site.min.css";

gulp.task("clean:js", function (cb) {
    rimraf(paths.concatJsDest, cb);
});

gulp.task("clean:css", function (cb) {
    rimraf(paths.concatCssDest, cb);
});

gulp.task("clean", ["clean:js", "clean:css"]);

gulp.task("min:js", function () {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(concat(paths.concatJsDest))
        .pipe(uglify())
        .pipe(gulp.dest("."));
});

gulp.task("min:css", function () {
    return gulp.src([paths.css, "!" + paths.minCss])
        .pipe(concat(paths.concatCssDest))
        .pipe(cssmin())
        .pipe(gulp.dest("."));
});

gulp.task("min", ["min:js", "min:css"]);

As you can see, we basically have 4 tasks and 2 aggregate tasks.

  • Clean JavaScript files
  • Clean CSS files
  • Minify JavaScript files
  • Minify CSS files

The aggregate tasks are basically just to do all the cleaning or the minifying at the same time.

Getting more out of it

That brings us to feature parity with what was available in MVC 5 for JavaScript and CSS minification. However, why not go a step further?

Linting our JavaScript

One of the most common things we need to do is make sure we do not write horrible code. Linting is a code analysis technique that detects problems or stylistic issues early.

How do we get this working with gulp?

First, we install gulp-eslint by running npm install gulp-eslint --save-dev in the web application project folder. This installs the required dependencies, and we can start writing some code.

First, let’s start by getting the dependency:

var eslint = require('gulp-eslint');

Then, in your default ASP.NET Core 1.0 project, open up site.js and paste the following code:

function something() {
}

var test = new something();

Let's run the min:js task with gulp like this: gulp min:js. This will show that our file is minified but… there's something wrong with the style of this code. The something function should be Pascal-cased, and we want that reflected in our code.

Let’s integrate the linter in our pipeline.

First let’s create our linting task:

gulp.task("lint", function() {
    return gulp.src([paths.js, "!" + paths.minJs], { base: "." })
        .pipe(eslint({
            rules : {
                'new-cap': 1 // function need to begin with a capital letter when newed up
            }
        }))
        .pipe(eslint.format())
        .pipe(eslint.failAfterError());
});

Then, we need to integrate it in our minify task.

gulp.task("min:js", ["lint"], function () { ... });

Then we can either run gulp lint or gulp min and see the result.

C:\_Prototypes\WebApplication1\src\WebApplication1\wwwroot\js\site.js
6:16 warning A constructor name should not start with a lowercase letter new-cap

And that's it! You can pretty much build your own configuration from the available ruleset and have clean JavaScript as part of your build flow!

Many more plugins available

More gulp plugins are available on the registry. Whether you want to lint, transpile to JavaScript (TypeScript, CoffeeScript), compile CSS (Less, Sass) or minify images… everything can be included in the pipeline.

Look up the registry and start hacking away!

Creating a simple ASP.NET 5 Markdown TagHelper

I’ve been dabbling a bit with the new ASP.NET 5 TagHelpers and I was wondering how easy it would be to create one.

I’ve created a simple Markdown TagHelper with the CommonMark implementation.

So let me show you what it is, what each line of code is doing and how to implement it in an ASP.NET MVC 6 application.

The Code

using CommonMark;
using Microsoft.AspNet.Mvc.Rendering;
using Microsoft.AspNet.Razor.Runtime.TagHelpers;

namespace My.TagHelpers
{
    [HtmlTargetElement("markdown")]
    public class MarkdownTagHelper : TagHelper
    {
        public ModelExpression Content { get; set; }

        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            output.TagMode = TagMode.SelfClosing;
            output.TagName = null;

            var markdown = Content.Model.ToString();
            var html = CommonMarkConverter.Convert(markdown);
            output.Content.SetContentEncoded(html);
        }
    }
}

Inspecting the code

Let's start with the HtmlTargetElementAttribute. This wires the HTML tag <markdown></markdown> to be interpreted and processed by this class. There is nothing stopping you from having more than one target.

You could for example target element <md></md> by just adding [HtmlTargetElement("md")] and it would support both tags without any other changes.
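Concretely, stacking the attributes on the class from above is all it should take; a small sketch:

[HtmlTargetElement("markdown")]
[HtmlTargetElement("md")]
public class MarkdownTagHelper : TagHelper
{
    // Same implementation as above.
}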

The Content property will allow you to write code like this:

@model MyClass

<markdown content="@ViewData["markdown"]"></markdown>
<markdown content="Markdown"></markdown>

This easily allows you to use your model or any server-side code without having to handle data mapping manually.

TagMode.SelfClosing will force the HTML to use a self-closing tag rather than having content inside (which we're not going to use anyway). So now we have this:

<markdown content="Markdown" />

All the remaining lines of code are dedicated to making sure that the content we render is actual HTML. Setting output.TagName to null just makes sure that we do not render the markdown tag itself.

And… that’s it. Our code is complete.

Activating it

Now you can't just go and create TagHelpers and have them picked up automatically without wiring up one thing.

In your ASP.NET 5 projects, go to /Views/_ViewImports.cshtml.

You should see something like this:

@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"

This will load all TagHelpers from the Microsoft.AspNet.Mvc.TagHelpers assembly.

Just duplicate the line and type in your assembly name.

Then in your Razor code you can have the code below:

public class MyClass
{
    public string Markdown { get; set; }
}
@model MyClass
@{
    ViewData["Title"] = "About";
}
<h2>@ViewData["Title"].</h2>

<markdown content="Markdown"/>

Which will output your markdown formatted as HTML.

Now, whether you load your Markdown from files, a database or anywhere else… you can have your users write rich text in any text box and have your application render it as HTML.

Components used

  • CommonMark.NET (the CommonMarkConverter used above)

Should our front-end websites be server-side at all?

I’ve been toying around with projects like Jekyll, Hexo and even some hand-rolled software that will generate me HTML files based on data. The thought that crossed my mind was…

Why do we need dynamically generated HTML again?

Let me take examples and build my case.

Example 1: Blog

Of course, simpler examples like blogs could be all static. If you need comments, you could go with a system like Disqus; that is quite literally one of the only parts of such a system that needs to be dynamic.

RSS feed? Generated from the posts. The posts themselves? They could be generated periodically from a database or from Markdown files. The resulting output can be hosted on a Raspberry Pi without any issues.

Example 2: E-Commerce

This one is more of a problem. Here are the things that don't change a lot: products. OK, they may change, but do you need your site updated this very second? Can it wait a minute? Then all the "product pages" could literally be static pages.

Product reviews? They will need to be approved anyway before you want them live. Put them in a server-side queue and regenerate the product page with the updated review once it's done.

There are three things I see that would need to be dynamic in this scenario: search, checkout and reviews.

Search, because as your product catalog scales up, so does your data; doing the search client-side won't scale at any level. Checkout, because we are now handling an actual order and it needs a server component. Reviews, because we'll need to approve and publish them.

In this scenario, only the search is an actual "read" component that stays server-side. Everything else? Pre-generated. Even if the search brings you the list of products dynamically, it can still point to static pages.

All the other write components? Queued server-side to be processed by the business itself, with either Azure or an off-site component.

All the backend side of the business (managing products, availability, sales, whatnot, etc.) will need a management UI that will be 100% dynamic (read/write).

Question

So… do we need a dynamic front-end built with the latest server framework? On the public-facing side too, or just the back end?

If you want to discuss it, Tweet me at @MaximRouiller.

You should not be using WebComponents yet

Have you read about WebComponents? It sounds like something we have all been trying to achieve on the web for… well… a long time.

If you take a look at the specification, it's hosted on the W3C website. It smells like a real specification. It looks like a real specification.

The only issue is that WebComponents is really four specifications. Let's take a look at all four of them.

Reviewing the specifications

HTML Templates

Specification

This specific specification is not part of the "Web Components" section. It has been integrated into HTML5. Hence, this one is safe.

Custom Elements

Specification

This specification is for review and not for implementation!

Alright, let's not touch this one yet.

Shadow DOM

Specification

This specification is for review and not for implementation!

Wow. Okay, so this one is out the window too.

HTML Imports

Specification

This one is still a working draft so it hasn’t been retired or anything yet. Sounds good!

Getting into more details

So open all of those specifications. Go ahead. I want you to read one section in particular: the authors/editors section. What do we learn? That those specs were drafted and edited entirely by the Google Chrome team, except maybe HTML Templates, which has Tony Ross (previously a PM on the Internet Explorer team).

What about browser support?

Chrome already has all of the specs implemented.

Firefox has implemented them but put them behind a flag (in about:config, search for the property dom.webcomponents.enabled).

In Internet Explorer, they are all listed as Under Consideration.

What that tells us

Google is pushing for a standard. Hard. They built the spec and are pushing it very hard, since all of this is available in Chrome stable right now. No other vendor has contributed to the spec itself. Polymer is also a project built around WebComponents, and it's built by… well, the Chrome team.

That tells me that nobody should be implementing this in production right now. If you want to contribute to the spec, fine. But WebComponents are not to be used yet.

Otherwise, we're only getting into the same situation we were in 10-20 years ago with Internet Explorer, and we know that's a painful path.

What is wrong right now with WebComponents

First, it's not cross-platform. We've handled that in the past; that's not what stops us.

Second, the current specification is being implemented in Chrome as if it were already recommended by the W3C (it is not). The specification may still change, which could render your current implementation completely inoperable.

Third, there's no guarantee that the current spec is even going to be accepted by the other browsers. If we get there and Chrome doesn't move, we're back to the Internet Explorer 6 era, but this time with Chrome.

What should I do?

As far as production is concerned, do not use WebComponents directly. Also avoid Polymer, as it's only a thin wrapper around WebComponents (even with the polyfills).

Use other frameworks that abstract away the WebComponents part, frameworks like X-Tag or Brick. That way you can benefit from the features without learning a specification that may become obsolete very quickly or never be fully implemented.

Fix: Error occurred during a cryptographic operation.

Have you ever had this error while switching between projects that use Identity authentication?

Are you still wondering what it is and why it happens?

Clear your cookies. The FedAuth cookie is encrypted using the machine key defined in your web.config. If none is defined in your web.config, an auto-generated one is used. If the key used to encrypt isn't the same as the one used to decrypt?

Boom goes the dynamite.
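If you want the cookie to survive across machines and deployments (or simply want the behavior to be explicit), you can pin the machine key in your web.config. A hedged sketch; the key values below are placeholders you would generate yourself and keep out of source control:

<configuration>
  <system.web>
    <!-- Placeholder keys: generate your own and keep them out of source control. -->
    <machineKey validationKey="YOUR-VALIDATION-KEY"
                decryptionKey="YOUR-DECRYPTION-KEY"
                validation="HMACSHA256" decryption="AES" />
  </system.web>
</configuration>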

Content In HTML? Compile it from Markdown instead.

Most content on our blogs, CMSes or anything related to content input is created today with WYSIWYG editors. Those normally run in a browser but can also be found in desktop applications.

Browser version

Libraries like Bootstrap WYSIWYG and TinyMCE leverage your browser to generate HTML that isn't particularly stylized (no CSS classes or styles), relying instead on the webpage's styles to render properly. Those by themselves are not too complicated. However, the content is locked into the HTML semantics of the moment it was written.

When writing your content in a CMS, it will be stored as-is and re-rendered with almost the exact same HTML that you created at first (some sanitize your input and sometimes append content).

Problem for code-based blog

Most blogs devoted to coding will contain some code at some point. This is where things start to smell. Most WYSIWYG editors will wrap the code in pre/code tags (or both), with a class attribute tied to the syntax highlighter used at the time.

My blog has been migrated multiple times. At first I was on Blogger, then on BlogEngine.NET and finally on MiniBlog. All those engines stored the code as it was written with the editor. Worse yet if I used Live Writer, since it does not strip style attributes and other nonsense. At best, you end up with some very horrible HTML that you have to clean between export/import. At worst, there are serious issues with how your posts are rendered.

The content is rendered as it was written but not as it was meant to appear. Writing code without Live Writer? Well, you’ll need to write some HTML and don’t forget the proper tags and correct CSS class!

Content as source code

I see my content as source code. It should be in a standard format that can be compiled to HTML on demand. Did my syntax highlighter plugin change so that I now need to re-render my code tags? I want to just toggle some options, not go back through all my content doing find & replace.

I want to manage my content like I manage my code. A bug? Pull Request. Typo? Pull Request.

That is why my blog is going to end up in a GitHub repository very soon. It makes it easier to correct things.

But why not HTML?

HTML, as much as it's supposedly "human readable", really is not. Once you start creating complex content with it, you need a powerful editor to write it; it's nothing you want to do in Notepad.

Writing a link in Markdown is something like this:

[My Blog](http://blog.maximerouiller.com)

And in HTML it goes to this:

<a href="http://blog.maximerouiller.com" alt="">My Blog</a>

That's why I think a format like Markdown is the way to go. You write the semantics you want and let the Markdown renderer generate the proper HTML. Markdown is the source; HTML is the compiled output.

Do I want to generate an EPUB instead? You can. The ProGit book is completely written in Markdown.

Do I want to generate a PDF instead? You also can. In fact, Pandoc supports a lot more: it has you covered for HTML, Microsoft Word, OpenOffice/LibreOffice, EPUB, TeX, PDF, etc.
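For example, converting a post is a one-liner with Pandoc (assuming Pandoc is installed; PDF output additionally needs a LaTeX engine available):

pandoc my-post.md -o my-post.epub
pandoc my-post.md -o my-post.pdf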

If all my blog posts were written in Markdown, I could go back in time and offer a PDF/EPUB version of every post I ever wrote. That's not as easy with HTML if things are not standardized.

Converting HTML to Markdown

I'm currently toying with Hexo. It has many converters, including one that imports from RSS. It managed to import all my blog posts (tags included), but I was left with a bunch of Markdown files that needed some very tough love.

Just like with any legacy code, I went through my legacy writing and removed all the remaining HTML left from the conversion. Most of it was removed, mind you. But the code blocks? They could not be converted properly. I had to manually remove pre and code tags everywhere. Indentation was also messed up from previous imports. This had to be fixed.

Right now, I have regenerated a whole copy of my blog without breaking any article link. All the code has been standardized, indented, and displayed with a plugin. If I change the theme or blog engine, I just take my MD files with me and I'm mostly good to go.

Deploying it to Azure

Once you have a working Hexo directory, it will generate its content in a public folder.

Since we only want to deploy this, we will need to add a file named .deployment at the root of our repository.

Its content should be:

[config]
project = public

You can find more options about this on the Kudu project page about Customizing Deployments.

Issues left to resolve

Unless I move to an engine like Jekyll or Octopress, most blog engines do not support Markdown files as blog input. We're still going to have to deal with converters for the time being.

Renewed MVP ASP.NET/IIS 2015

Well, there it goes again. It was just confirmed that I've been renewed as an MVP for the next 12 months.

Becoming an MVP is not an easy task. Offline conferences, blogs, Twitter, helping manage a user group: all of this is done in my free time, and it requires a lot of it. But I'm so glad to be part of the big MVP family once again!

Thanks to all of you who interacted with me last year, let’s do it again this year! :)