EntityFramework Performance and IEnumerable vs IQueryable

Working in the .Net world, you get pretty used to dealing with IEnumerable collections. However, you have to be aware of the performance issues that can arise when using them with EntityFramework. Sometimes I forget about IQueryable because LINQ to Entities hides much of the difference between retrieving objects from a database and working with an in-memory collection, and IQueryable is pretty specific to querying a database.

When using the Repository pattern, one of the things I love to do is add a flexible “Find” method to the repository. Below is an example:

public partial class Order
{
    public int ID { get; set; }
    public string CustomerOrderNumber { get; set; }
    public DateTime? ShipDate { get; set; }
    public int OrderTypeID { get; set; }
}

public class FindOrdersRequest
{
    public IEnumerable<int> OrderIDs { get;set; }
    public IEnumerable<string> CustomerOrderNumbers { get; set; }
    public IEnumerable<int> OrderTypeIDs { get; set; }
    public bool? HasShipDate { get; set; }
    public DateTime? ShipDateBefore { get; set; }
    public DateTime? ShipDateAfter { get; set; }
    
    public FindOrdersRequest()
    {
        OrderIDs = new List<int>();
        CustomerOrderNumbers = new List<string>();
        OrderTypeIDs = new List<int>();
    }
}

//I would normally implement an interface here, but for the sake of brevity I'm excluding it from this example
public class OrderRepository
{
    private IDbContext context;
    public OrderRepository(IDbContext context)
    {
        this.context = context;
    }

    public IEnumerable<Order> Find(FindOrdersRequest request)
    {
        IEnumerable<Order> orders = context.Set<Order>().AsEnumerable();
        if(request.OrderIDs.Any())
        {
            orders = orders.Where(o => request.OrderIDs.Contains(o.ID));
        }
        if(request.CustomerOrderNumbers.Any())
        {
            orders = orders.Where(o => request.CustomerOrderNumbers.Contains(o.CustomerOrderNumber));
        }
        if(request.OrderTypeIDs.Any())
        {
            orders = orders.Where(o => request.OrderTypeIDs.Contains(o.OrderTypeID));
        } 
        if(request.ShipDateAfter.HasValue)
        {
            orders = orders.Where(o => o.ShipDate.HasValue && o.ShipDate >= request.ShipDateAfter);
        }
        if (request.ShipDateBefore.HasValue)
        {
            orders = orders.Where(o => o.ShipDate.HasValue && o.ShipDate <= request.ShipDateBefore);
        }
        if(request.HasShipDate.HasValue)
        {
            orders = orders.Where(o => o.ShipDate.HasValue == request.HasShipDate.Value);
        }

        return orders;
    }
}

public class OrderService
{
    private OrderRepository orderRepository;
    public OrderService(OrderRepository orderRepository)
    {
        this.orderRepository = orderRepository;
    }

    public List<Order> GetUnshippedOrders()
    {
        return orderRepository.Find(new FindOrdersRequest()
        {
            HasShipDate = false
        }).ToList();
    }
}

The major problem with this code is in the Find method above – that AsEnumerable() call on context.Set<Order>() means every row of the table is pulled from the database in full and then filtered in memory. The SQL generated will be equivalent to a SELECT *, and will probably look something like:

SELECT [Extent1].[ID] AS [ID], 
    [Extent1].[CustomerOrderNumber] AS [CustomerOrderNumber], 
    [Extent1].[ShipDate] AS [ShipDate],
    [Extent1].[OrderTypeID] AS [OrderTypeID]
    FROM [dbo].[Orders] AS [Extent1]

Now, you might not notice if you have 10 rows, but if you have 1,000,000, you’ll notice as your application burns to the ground and consumes all the memory on whatever server it’s running on.

So, an easy fix is to change that one line to IQueryable / AsQueryable() like so:

IQueryable<Order> orders = context.Set<Order>().AsQueryable();

Now we get the benefit of deferred execution: nothing is sent to the database until ToList is called on the results of the Find method from OrderRepository. The SQL generated will now be something like:

SELECT [Extent1].[ID] AS [ID], 
    [Extent1].[CustomerOrderNumber] AS [CustomerOrderNumber], 
    [Extent1].[ShipDate] AS [ShipDate],
    [Extent1].[OrderTypeID] AS [OrderTypeID]
    FROM [dbo].[Orders] AS [Extent1]
    WHERE [Extent1].[ShipDate] IS NULL

This is a huge improvement already, but it can get even better. In the OrderRepository class, if we also make the return type IQueryable<Order>, callers can compose further queries against the database before pulling the results into memory.

public IQueryable<Order> Find(FindOrdersRequest request)
{
    //the rest of the method remains the same
}

This distinction is important, and I will provide an example.

After I add this “Find” functionality to my repositories, I tend to build reports using those methods, which frequently utilize .GroupBy() after filtering. If we were to leave the Find method as returning an IEnumerable<Order> collection, we would find that the SQL generated would not be what we wanted.

For example, let’s say I now wanted a report that showed the number of shipped and unshipped orders by OrderTypeID. I would add a method to the OrderService as such:

//assume that an object ReportItem exists with properties as defined below
public IEnumerable<ReportItem> GetUnshippedOrdersByTypeReport()
{
    var report = new List<ReportItem>();
    var results = orderRepository.Find(new FindOrdersRequest(){ HasShipDate = false })
                                 .GroupBy(order => new
                                 {
                                     OrderTypeID = order.OrderTypeID
                                 })
                                 .Select(g => new
                                 {
                                     OrderTypeID = g.Key.OrderTypeID,
                                     ShippedOrderCount = g.Count(x => x.ShipDate.HasValue),
                                     UnshippedOrderCount = g.Count(x => !x.ShipDate.HasValue)
                                 });
    foreach(var result in results)
    {
        report.Add(new ReportItem()
        {
            OrderTypeID = result.OrderTypeID,
            ShippedOrderCount = result.ShippedOrderCount,
            UnshippedOrderCount = result.UnshippedOrderCount
        });
    }
   
    return report;
}

With this code, the benefits of using IQueryable over IEnumerable in the repository become clear.

If we leave the repository returning IEnumerable, the work will be done in two phases:

  1. The filtering of the “FindOrdersRequest” will be executed as the SQL statement above and the results will be stored into a temporary IEnumerable collection
  2. The Group By operation will operate on this temporary collection. More trips to the database will be taken if any navigation properties are referenced in the GroupBy (there are none in this example)

If we change the repository to use IQueryable, the SQL generated will be a single, neat statement that filters and groups in one round trip, resulting in much better performance. Another benefit is that we are projecting specific columns rather than populating an entire Order object with every field. For this trivial example it doesn’t make much of a difference, but if you’re dealing with tables/objects that have many columns/properties, you will notice. And if you’re deploying to a cloud-based environment, compute time and efficiency translate directly into cost, so following performance best practices will help you out a lot in that respect.
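
As one more illustration of the projection benefit, here is a minimal sketch of a hypothetical caller (not code from the service above) that selects only the columns it needs; because Find now returns IQueryable<Order>, the Select is folded into the generated SQL:

var shipDates = orderRepository
    .Find(new FindOrdersRequest() { OrderTypeIDs = new List<int> { 1, 2 } })
    .Select(o => new { o.ID, o.ShipDate }) //only these two columns come back from the database
    .ToList();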

 

How to Add and Manage Outlook Rules/Filters for Office 365 Shared Mailboxes

It isn’t obvious, but you can set up and manage rules for shared mailboxes in Office 365 just as you do for users’ mailboxes. It isn’t obvious because you can’t administer these rules through the desktop client (or any other client) like you can with user mailboxes, and the settings for managing these rules don’t even appear to be available from the Office 365 or Exchange administration panels. Here are the steps to take:

  • Log in to Office 365 with an account that has administrative access to the Shared Mailbox
  • Enter the following URL into your browser to get to the shared mailbox options page: https://outlook.office365.com/ecp/<email address>, where <email address> is the email address of the shared mailbox whose rules you want to manage.
    • For example, if your email address was info@contoso.com, you would enter the address https://outlook.office365.com/ecp/info@contoso.com
  • Click the “Organize email” section on the left menu

This method also works for editing user mailbox rules, provided you have access.

Note: As of this writing, Internet Explorer is the only browser I have tried that successfully adds a rule involving a “sent from” or “sent to” condition. Chrome and Firefox both give a CORS error because selecting people tries to open the contacts app for the account, and that application is hosted on a different domain. The message I receive from Firefox is:

08:59:07.114 Load denied by X-Frame-Options: https://outlook.office.com/owa/#viewmodel=OwaOptionRichPeoplePickerViewModelFactory does not permit cross-origin framing.

Managing Global Rules

If you simply want to edit global rules that affect all mail flowing to/from your organization, you can follow the steps below:

Getting to the Exchange Admin Center

  • Log in to Office 365 with an account that has administrative access to your Exchange organization
  • Open the Admin Center by clicking the “Admin” button
  • When the Admin Center opens, click the “Admin Centers” link on the left-side menu and choose “Exchange”

Managing Rules

  • From the Exchange Admin Center, there is a “Rules” link under the mail flow section. Remember, this area only allows you to perform a limited set of actions, which does NOT include moving a message to a specific folder, since these rules apply at a global level. The actions you can perform here are:
    • Forward the message for approval…
    • Redirect the message to…
    • Block the message…
    • Add recipients…
    • Apply a disclaimer to the message…
    • Modify the message properties…
    • Modify the message security…
    • Prepend the subject of the message with…
    • Generate incident report and send it to…
    • Notify the recipient with a message…

Windows Server Backup Trials and Tribulations

I know I’m not the first person to say this (2 people advised me of this today), but don’t use Windows Server Backup. Get a real backup solution and save yourself some serious headaches. I know I will be after my latest fun with WSB. *Caveat – my experience is with W2K8r2 servers, so perhaps it’s been improved in newer iterations.

I’ve been managing server hardware for a couple of small businesses for about 10 years now, and I’ve always just used Windows Server Backup (in conjunction with Azure backup, as of a couple of years ago) to backup said servers. Today was one of those moments where you have fleeting thoughts of “Am I going to get fired? Did I just lose all of the data for this organization?” I’d like to believe that I have better safeguards in place than to allow that to happen (and it turns out, I was able to get it working again), but man does it scare the crap out of you when you can’t get things working.

Considering my server is in a RAID 5 configuration, you might think “What is the problem? Swap out the failed drive and move on.” That’s what I normally do when I encounter a degraded virtual disk, but this time it didn’t work because a second disk failed during the rebuild process (different from the disk that caused the degradation in the first place). The system was fine until a several-hours-long power outage depleted my battery backup’s power, causing the system to go down in the dead of night. As a side effect, my virtual disk was degraded, and two of the physical disks comprising the virtual drive reported problems. The virtual disk kept failing to rebuild, and pulling out the drive that was keeping it from rebuilding would have rendered the machine useless, since it wasn’t the hot spare.

I woke up extra early the next morning to come in and restore the backup image that had been taken a few hours before the failure. The goal was to replace all of the failed physical disks, delete the virtual disk and create a new one with the same parameters as the old, and then restore the image – all before business started that day.

I guess I forgot how much of a PITA Windows Server Backup was to use. Here are the instructions I left myself from the last time I had a multi-disk failure:

  • A Windows System Image backup must be performed before attempting to restore.
    • Verify this before you nuke your virtual disk
  • Since the server has a Dell PERC s300 RAID controller, Windows needs the appropriate drivers to work with the RAID controller when restoring the image. Go to the Dell Support website and enter the Service Tag for the server.
    • Find the Windows Server 2008 R2 RAID / PERC s300 driver in the list of available drivers.
    • Download the “Hard Drive” format file (an EXE) and run it on the computer where you downloaded it; running the file will extract the drivers to a folder of your choosing
    • Place that folder of drivers onto an external drive (or burn it to CD)
  • Check the settings of your RAID configuration – verify size and caching options so you can use them for your new image
  • Power off the server
  • Swap out any bad hardware for new drives
  • Restart the server and enter the RAID configuration menu (Ctrl + R).
    • Find the virtual disk and delete it (this can also be done in OpenManage Server Administrator prior to rebooting).
    • Initialize any new physical disks for use in the virtual disk.
    • Create a new virtual disk, matching the configuration to what it was previously (usually all available space).
    • Make sure to swap the virtual disk into virtual disk slot 1 (the only bootable one), if you have multiple virtual disks in your array.
  • Install the Windows Server OS disc into the DVD tray
    • Continue to boot
  • When Windows Setup loads, choose the language and hit the next arrow.
  • Choose the “Repair an installation” link.
  • Click the “Load Drivers” button
  • Plug the external drive (or insert the CD) with the downloaded drivers into the server and browse to it when Windows prompts you for the driver location.
  • Plug the external drive with the backup into the server
  • Choose the Restore from System image option and hit next
  • Windows Server Backup should now detect any backups from the external drive
    • Note, this can take a reallllly long time (> 30 minutes)
  • Choose next and Finish. The image should restore after a few hours.

One would think that with these detailed instructions, it should be fairly easy to restore the image. The answer is no. For one thing, Windows Recovery seemed to really struggle to recognize my external backup drive. This is especially disconcerting and leads you down some incorrect paths (starting to think the data is corrupted and trying to fix problems with the disk). Eventually, I got it to work through trial and error, but it took far too long and interrupted business operations far longer than it should have. Some problems I encountered:

  • The “Load Drivers” step is essential if you are restoring your image over an existing drive (and not starting from scratch). It may also be essential if you are starting from scratch – it never worked for me without loading those drivers, so I think it’s a good idea anyway (assuming you’re working with a RAID controller)
  • Feedback with the System Image Recovery is extremely poor. You have no idea what is going on most of the time. You are simply stuck with progress bars that never end. You really just have to wait and hope that it completes.
  • My first attempt at creating the virtual disk resulted in a size just slightly smaller than “all available space”, so during the image restore process, I received the helpful error “A data disk is currently set as active in BIOS. Set some other disk as active or use the DiskPart utility to clean the data disk, and then retry the restore operation. (0x80042406).” I opened up the DiskPart utility and cleaned the data disk, but as I suspected, the problem was really something else: the new disk’s size has to match (or exceed) that of the cloned drive
  • At one point, I loaded Windows as a fresh install and tried to restore from Windows Server Backup within the OS. The only problem with that is that you can’t do a bare metal restore – you can only restore files and drives. While this was better than nothing, I didn’t like the idea of trying to figure out how to reconfigure all the software on that machine. I’m glad I stuck with the image restore.
    • Windows Server Backup has always been incredibly slow, as well. For the most part, I use the command line when running backups with it, but the user interface is ungodly slow. Just opening it and getting to the initial view is really laggy.
  • It seemed like it mattered when I plugged my external drive in. If I had the drive plugged in when I started Windows Recovery, the image restore never found it. But when I plugged it in after loading the drivers, it seemed to be okay

In general, unresponsive software is a huge pet peeve of mine. Let me know that something is going on. I don’t know whether something is hanging or just takes forever. Also, no software should be this picky – especially something involved in a potentially mission critical application.

If you’ve encountered problems trying to restore your Win2K8r2 server and you’re using RAID-5, these steps might help. As I discovered today, the internet is full of reports of problems restoring from WSB (most posts are older because Server 2008 R2 is pretty old now, in OS years).

This episode did help me reflect on some problems though – namely, that I need to have more resilient processes in place for situations like this. For example, what if the motherboard failed on this server? What do I do then? Have Dell overnight a motherboard? Redundancy is non-existent here. Furthermore, this is an area where you need to practice – I’m sure if I had done restores more than 2-3 times in my entire life, it wouldn’t have been so bad. Running through failure scenarios, in the spirit of the infamous “Chaos Monkey”, would help make things much more resilient. At the very least, better documentation of the services and applications installed on the server will help in the event of a catastrophic loss or even just a migration.

As a developer, though, the correct answer is really to just virtualize all the things so I don’t have to worry about disk failures anymore 🙂

Asp.Net Core appsettings tips

I’ve recently had the opportunity to work on a new project where I was able to use Asp.Net Core for the first time. Well, not completely for the first time – I’ve contributed to an open source project that has been using .Net Core for some time, back when it was called DNX or ASP.NET 5. Anyway, the work I did there was focused on writing code for the application, not configuring the infrastructure.

A lot has changed, but the changes are largely for the better. There are a few things that tripped me up, so I figured I’d write about them here.

AppSettings have gone JSON

This in and of itself isn’t much of a revelation, but I, for one, am glad to have JSON configuration over XML. In the Startup.cs file, appsettings are configured by default as such:

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

Just like before, when we had Web.config, Web.Release.config, and Web.{EnvironmentName}.config, any environment-specific configuration will be applied on top of the values defined in the appsettings.json file. So, if you have an appsettings.json file that looks like:

{
    "MyVariable1": "value1",
    "MyVariable2": "value2"
}

and then you define a file appsettings.production.json that looks like:

{
    "MyVariable1": "productionValue",
}

The production file’s value will be used for MyVariable1 when the application is running in a production environment, as expected.

Accessing appsettings

The easiest way to access a value from your appsettings file is to use Configuration.GetValue:

Configuration.GetValue("MyVariable1", "");

The above will retrieve the value for MyVariable1, or an empty string if no key is found for MyVariable1. The nice thing is you don’t get an exception if a key isn’t found, but this could be an issue if you were expecting a key and silently get the default value instead.
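
If you would rather fail fast when a required setting is missing, one option (just a sketch of the idea, reusing the setting name from above) is to skip the default and check for null yourself:

var myVariable1 = Configuration.GetValue<string>("MyVariable1");
if (myVariable1 == null)
{
    //no configuration source supplied the key, so surface the problem loudly
    throw new InvalidOperationException("Missing required setting 'MyVariable1'.");
}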

If your appsettings file has nested objects like this:

{
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Information",
            "Microsoft": "Information"
        }
    }
}

you can retrieve values by using Configuration.GetValue<string>("Logging:LogLevel:Default");

Personally, I don’t like to use magic strings – I prefer to use a strongly typed configuration.

Strongly Typed appsettings

Rick Strahl has a very good article about Strongly typed appsettings, but I will cover the basics. In a nutshell, you need to do two steps to make this work:

  1. Create a class that has all of the corresponding properties of your appsettings (or just a subsection of your appsettings, as I will show below)
  2. Wire up your class by calling the services.Configure<T> method in the ConfigureServices method of your Startup.cs class

Let’s use the following appsettings.json file as an example:

{
    "MySettings" : {
        "AdminEmail" : "admin@email.com",
        "ErrorPath" : "/Home/Error"
    }
}

All we need to complete step 1 is to have a corresponding class for these settings. Here is the corresponding example:

public class MySettings
{
    public string AdminEmail { get; set; }
    public string ErrorPath { get; set; }
}

Now, in our Startup.cs class, we can add the following to our ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.Configure<MySettings>(Configuration.GetSection($"{nameof(MySettings)}"));
}

That’s it. Now we can simply inject IOptions<MySettings> into our MVC/Web API controller constructors and the framework will provide that dependency for us.

Note that in this example we called Configuration.GetSection and gave it the name of our section/class – if you only listed the keys AdminEmail and ErrorPath at the root of the appsettings file (without any nested objects), you could have done the same by calling just services.Configure<MySettings>(Configuration);
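
To round out the example, here is roughly what the injection side looks like (the controller and action names here are hypothetical; the settings arrive wrapped in IOptions<MySettings>):

//requires using Microsoft.AspNetCore.Mvc and Microsoft.Extensions.Options
public class HomeController : Controller
{
    private readonly MySettings settings;

    public HomeController(IOptions<MySettings> options)
    {
        //options.Value holds the MySettings instance bound from the "MySettings" section
        settings = options.Value;
    }

    public IActionResult Contact()
    {
        ViewBag.AdminEmail = settings.AdminEmail;
        return View();
    }
}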

Using appsettings in your Startup.cs class

One gotcha that had me stumped for a little while was trying to use some of my appsettings values to drive configuration in my Startup.cs class. The trick here is using the Bind method on Configuration. Here is a good example of what I mean: a lot of tutorials and examples will show configuring exception handling as:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    app.UseExceptionHandler("/Home/Error");
}

I like to make that route configurable in my appsettings, so here is how to do that:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    var mySettings = new MySettings();
    Configuration.GetSection($"{nameof(MySettings)}").Bind(mySettings);

    app.UseExceptionHandler(mySettings.ErrorPath);
}

Sending Labels to a Thermal Printer using POST Requests

If you’ve ever used a thermal printer on a website like FedEx.com, you know it can be kind of ugly to get your label to print. As of this writing, FedEx.com still relies on a Java plugin to do the dirty work of sending the data to your locally-connected printer. I have a couple of applications that integrate with FedEx web services, and in the past I too have relied on Java applets (jZebra, which became qzPrint) to do the work of sending a print job via a web browser.

After searching around, I found a better way to do it: use a simple POST request to send to a network printer. This article is what really got me started: Label And Receipt Printing – Printing from Websites part 2

The sample code shows how you can create a simple XmlHttpRequest object and send it the EPL/ZPL you want. Here is a barebones sample:

var zpl = "^XA^PW400^LL200^FO20,20^A0N,30,30^FDTest^FS^XZ"; //some zpl to send to the printer
var zebraPrinterUrl = "http://192.168.0.100/pstprnt"; //ip address of the printer
var request = new XMLHttpRequest();
request.onload = function () {
  //take some action
};
request.onerror = function () {
  //take some action
};
request.open("POST", zebraPrinterUrl, true); 
request.setRequestHeader("Content-Length", zpl.length);
request.send(zpl);

That’s really all there is to it.

A couple of caveats – as the article I linked above notes, CORS can be a bit of a problem. The zebra printer does not return the necessary Access-Control-Allow-Origin header, so I found this to be disruptive when I tried to use the AngularJS (1.0) $http service. Sending the post using $http would result in a response coming back with status 0, which indicates a CORS problem.

Therefore, I ended up using the XMLHttpRequest object in my application, which can significantly impact testability, unless you wrap the instantiation of XMLHttpRequest objects in another component that you inject into your controller or service.

Finally, because I’m using promises and asynchronous requests, I had to make sure that all of the labels being printed complete before resolving or rejecting the promise.

This solution obviously isn’t very scalable, but I do think it provides much greater flexibility than the old way of using Java or some browser plugin to do the job.

Changing from Underscore to Lodash

I have read in a few places that lodash is the way to go (over underscore) when it comes to JavaScript collection manipulation libraries, but I hadn’t gotten around to swapping it into any of my applications using underscore until recently. I also had a conversation with one of the contributors to the excellent moment.js library who told me that lodash is the way forward, and there are posts out there on the internet that suggest the same.

Doing the upgrade, I found there are a couple of functions supported by underscore that aren’t supported by lodash. The list I have found so far is below:

  • pluck (use map instead – it appears this was changed sometime in January with the release of 4.0: http://stackoverflow.com/questions/35136306/what-happened-to-lodash-pluck)
  • any (use some instead – I had used any because it operates much like the LINQ Any method)

I will add more here as I come across them.

Local Web Development with Adobe Typekit

My bread and butter is really front-end scripting (JavaScript) and server-side technologies (C#, Web API, MVC, etc.), so I feel like any time I really venture into the world of design, I learn something new — which is great!

While working with a designer on a new website, she requested we use Adobe Typekit to make better use of some nicer fonts on the web. Using great looking fonts has long been a challenge, but Typekit seems to offer a pretty slick solution to give you better control. One quick aside, I noticed that my Ghostery plugin doesn’t really like Typekit fonts and by default blocks typekit scripts. So be aware that users may not see your glorious fonts anyway if you use Typekit and they have Ghostery installed. If you want to know more about Ghostery and Typekit, here is a resource for you.

Using Typekit on Your Webpage

Now, in general, Adobe has made it pretty simple to use Typekit with a website. The basic steps are as follows:

  1. Create a “Kit”. On typekit.com (after logging in), there is a menu item called “<> Kits” that has an option “+ Create new kit”. A kit is a grouping of fonts you would like to use on your website. Once created, this kit will be assigned a unique ID that will be referenced in the Javascript Adobe provides for you to embed in your site.
  2. Add some fonts to your kit. Navigate to your library and find a font you like. Select that font and you will be given an option to “Add to Kit” (there is a button that reads “Web use: Add to Kit”). Once you have added all the fonts and variants to your kit, you are ready for the next step
  3. Add domain(s) to your kit settings. To do this, navigate back to your kit (using the “<> Kits” dropdown menu in the header) and choose “Kit settings”. From here, you can specify any domains where you will be using this kit (among other options). For development purposes, you can enter “localhost”. In theory this is all you have to do to enable Typekit to work with these domains; I found some obstacles with that, however. Be sure to save your changes.
  4. Publish your kit. After everything is correct in your settings, go back to your kit page and click the publish button in the lower right corner. It says it can take a couple of minutes for changes to become active, so be a little patient.
  5. Grab the embed code (JavaScript) and put it into your site’s head. Right next to where you updated the settings, you can see an “Embed Code” link that will present a popup with your default code, which should look something like this:
    <script src="https://use.typekit.net/myKitId.js"></script>
    <script>try{Typekit.load({ async: true });}catch(e){}</script>

    where “myKitId.js” is replaced with whatever kit ID was assigned to your kit.

In theory, this is where everything should just work. All you need to do is use your fonts in CSS as they are named (all lowercase, replacing spaces with hyphens). An example of a page is as follows:

<html>
    <head>
        <script src="https://use.typekit.net/myKitId.js"></script>
        <script>try{Typekit.load({ async: true });}catch(e){}</script>
        <style type="text/css">
            body { font-family: sans-serif; }
            .myFont { font-family: "futura-pt"; }
        </style>
    </head>
    <body>
        <p>This text should have font-family sans-serif</p>
        <p class="myFont">This text should have font-family futura-pt.</p>
    </body>
</html>

Of course, you have to replace the font-family values with whatever fonts are defined in your kit (and replace the kit ID), but you should see those two sentences rendered with different fonts. If you don’t, then something isn’t right.

Figuring Out How Typekit Loads Fonts

I saw the fonts load correctly on my public domain, but they wouldn’t show on my development machine (using localhost).

I noticed there was a “Show Advanced” link on Typekit’s “Embed Code” page, which displayed a self-executing JavaScript function that had the added benefit of asynchronous loading. All this “advanced” code really does is load an external JS file into a script tag inserted into the head and provides some CSS classes to hide some page flicker before the Typekit fonts are displayed. There is a pretty good explanation of the differences between the default and advanced Embed code here: https://helpx.adobe.com/typekit/using/embed-codes.html. Unfortunately, this didn’t have anything to do with the problem at hand.

Now, to really figure out why this wasn’t working, I pulled up Firebug and took a look at the HTTP requests in the “Net” tab. I could see that the request to my Typekit JavaScript file wasn’t completing. While I couldn’t figure out why, I tried pasting the URL of the request into my browser, and it loaded the JavaScript for me. The JavaScript displayed was compressed and pretty hard to interpret. I decided to save it and load it locally to see if it would make a difference.

Unfortunately, still no luck. But what I could see now was that inline CSS was being added to the head of my document. The CSS looked like this:

@font-face {
    font-family: "futura-pt";
    font-style: normal;
    font-weight: 400;
    src: url("urlToFont") format("woff2");
}
@font-face {
    font-family: "futura-pt";
    font-style: normal;
    font-weight: 300;
    src: url("urlToFont") format("woff2");
}

Storing the Fonts Locally and Serving them Up

It was at this point that I knew I could work around Typekit’s issues. By navigating to the urls in src attribute of the @font-face selectors, I was presented with a file to download. The file downloaded as just “l” (no extension). I guess the file extension would be .woff since the format listed in the CSS file was “woff2”. After downloading each font file and renaming them to something that represented what the font was (“futura-pt-book.woff” and “futura-pt-light.woff”), I put them in a “fonts” directory in my development folder.

Next, I created a fonts-local.css file that contained the CSS code that was being generated by Typekit (see the code snippet above) and stuffed a reference to that CSS file into the head of my document. I replaced the src: urls in the stylesheet with the path to the woff files in the directory I created above.

Finally, in my site template’s html head section, I removed the script calls to Typekit altogether and just referenced my local files.

This technique isn’t exactly ideal, because any changes I make to the kit won’t be propagated to this page (also, your web server has to be configured to serve woff files). However, it allows you to see your fonts locally during development if Adobe’s recommendations don’t work, and also gives you an option if you’re concerned about the whole Ghostery plugin / Adobe privacy concerns issue.

You might be asking, “Why didn’t you just download the .woff files from Typekit and skip all the steps in between?” The problem is that the .woff files don’t seem to be available for direct download. That’s probably because I could then take these woff files and use them with any site or distribute them (illegally), and I think they’re trying to control that kind of behavior as much as possible.

Let’s Encrypt + Azure = Win!

Back in 2014, it was reported that several tech companies were jointly forming a non-profit with the goal of offering free encryption services (SSL certificates, tools, etc.) for the entire internet. Fast forward to today (or earlier this month, technically), and they have issued over 10 million certificates.

As someone who has purchased certificates for my organization, I can see how the cost of purchasing a certificate would be prohibitive for people who just want to run a blog or a small site. It also hasn’t always been easy to configure.

I first heard about this project earlier this year and decided to implement it with an Azure site I had set up for a local organization I volunteer with. The configuration did take a little while to get going, but so far, it has been really great.

One of the nice things about the Azure integration is that it sets up a WebJob to auto-renew the certificate for you. The certificates are only issued with a 3-month lifespan, so renewal would get annoying if you had to constantly rekey the certificates yourself.

I found this resource incredibly useful in assisting me with the setup and configuration of my certificate in Azure: https://gooroo.io/GoorooTHINK/Article/16420/Lets-Encrypt-Azure-Web-Apps-the-Free-and-Easy-Way/21872

 

EF Code First Notes

I just wanted to make a couple of further notes about EF Code First that I’ve discovered since I started using it:

  1. Cascading Deletes: I don’t like cascading deletes – I prefer to be notified that linked resources must be removed before a resource can be deleted. I find it’s too easy to end up deleting things you don’t want to with cascading deletes on. To disable this behavior by default, you have to update the ModelBuilder’s conventions in your DbContext class like so:
    public partial class MyContext: DbContext
    {
       static MyContext()
       {
           Database.SetInitializer<MyContext>(null);
       }
    
       public MyContext() : base("MyConnectionString")
       {
       }
    
       protected override void OnModelCreating(DbModelBuilder modelBuilder)
       {
           modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
           modelBuilder.Conventions.Remove<ManyToManyCascadeDeleteConvention>();
           //Configurations go here
       }
    }

    Note that after you do this, your next Add-Migration will result in EF attempting to drop and re-add all of your foreign key constraints. Since I migrated my code base to EF Code First from Model First, my database already had cascading deletes disabled for existing tables, so I just deleted the add/drop foreign key statements from the generated migration.

  2. Creating integer primary key columns that aren’t identities: This one is pretty simple, but when mapping your entities, you must do the following:
    this.Property(t => t.Id)
        .HasDatabaseGeneratedOption(DatabaseGeneratedOption.None);
  3. Unique Indexes – By default, there isn’t a method on PrimitivePropertyConfiguration that allows unique indexes to be created. I found a great solution on StackOverflow that creates an Extension method that will allow you to create unique indexes.
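
    I won’t reproduce that answer here, but the general idea (a rough sketch of the EF 6.1+ column annotation approach, with names of my own choosing rather than the exact StackOverflow code) looks something like this:
    //uses System.ComponentModel.DataAnnotations.Schema, System.Data.Entity.Infrastructure.Annotations,
    //and System.Data.Entity.ModelConfiguration.Configuration
    public static class UniqueIndexExtensions
    {
        //Marks the mapped column with a unique index annotation; Add-Migration will pick it up
        public static PrimitivePropertyConfiguration HasUniqueIndex(
            this PrimitivePropertyConfiguration property, string indexName)
        {
            return property.HasColumnAnnotation(
                IndexAnnotation.AnnotationName,
                new IndexAnnotation(new IndexAttribute(indexName) { IsUnique = true }));
        }
    }

    A mapping could then call something like this.Property(t => t.Email).HasUniqueIndex("IX_Email") – Email being a hypothetical property.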

     

That Conference – Day 2

My second day at That Conference began with a keynote delivered by Zach Supalla, CEO of Particle, a startup that has created an Internet of Things (IoT) cloud platform. I’ll be honest, when I saw the speaker information ahead of the conference, I wasn’t sure I was going to be interested in hearing another startup story, but I’m really glad I got a chance to hear him speak. I really liked how his first attempt at a consumer product (an adapter for existing lightbulbs that allowed for a wifi connection and thus electronic control from other devices) was created as a solution to his father’s hearing loss. The idea was that the lightbulb would flicker when his father’s cellphone rang, alerting him that a call was incoming. While his landline phone used to do this because the house was wired to support it, the cellphone had no such connection. All in all, I really liked the products that his company has produced, and the story was pretty inspirational. Even though this first product didn’t succeed, they were able to pivot and find an even larger impact with their IoT platform. Zach’s core messages of learning to persevere in the face of adversity and truly listening to your customers really stuck with me – with the right idea, team, and luck, anyone can succeed.

After the keynote, I kicked around the idea of attending several sessions. There were a lot of interesting ones, but I decided to check out the Open Spaces and join a discussion about becoming a Microsoft MVP. I have never really been sure how people get to be Microsoft MVPs, but the process seems pretty simple: put together a strong body of work that demonstrates passion and impact in your area of focus, then complete an application and your nomination will be reviewed.

At lunchtime, I went to a presentation given by Inrule about their software, which allows businesses to define vocabulary and business rules without hard-coding those rules into their applications. The core problem their solution addresses is the communication gap between the use cases and user stories that business personnel write and the implementations that developers build. Why have a developer interpret the rules when someone who actually uses the software can define the rules themselves? The presentation was really well done, as they used a Star Wars metaphor to demonstrate the problems that changing business rules can introduce (particularly when businesses have sizeable amounts of red tape to cut through in order to implement a change).

For the 1:00 session, I attended a discussion called “C#: You don’t know jack” by George Heeres. There were a lot of useful tidbits in the discussion – some I was already aware of, while others were new to me. A lot of the things he presented were things I had learned at one point or another but had forgotten due to lack of use. There was a discussion of string allocation and memory consumption, and of using IFormatProvider/ICustomFormatter to give POCOs custom String.Format options. One of the most interesting ideas was creating attributes for enums and reading them through extension methods, which lets you provide an English-readable name for each enum value. There was some discussion at the end about returning IEnumerable collections on models from the database instead of IList or List, because IList has methods that imply functionality that may not be there (add, update, remove). The argument was that those terms imply add/update/remove against a data store, although I see them as valid because you are modifying the collection regardless of whether that collection is in memory or part of a data store. Context about where you are using the collection is important.
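
Going back to the enum tidbit: I didn’t write down George’s exact implementation, but the general pattern of pairing an attribute with an extension method looks roughly like this sketch (the enum, the choice of DescriptionAttribute, and the names here are mine, not his):

//uses System.ComponentModel and System.Reflection
public enum OrderStatus
{
    [Description("Awaiting Shipment")]
    AwaitingShipment,
    [Description("Shipped")]
    Shipped
}

public static class OrderStatusExtensions
{
    //Reads the DescriptionAttribute from the enum member to get a friendly display name
    public static string ToDisplayName(this OrderStatus status)
    {
        var field = typeof(OrderStatus).GetField(status.ToString());
        var description = field.GetCustomAttribute<DescriptionAttribute>();
        return description != null ? description.Description : status.ToString();
    }
}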

The 2:30 session wasn’t one of my favorites. It had to do with getting started with IoT, but the presentation was very fast-paced, with the intent of doing live demos of several objects built by the presenters. Unfortunately, they ran into problems connecting to wifi, both locally and in the cloud, so they ended up spending a lot of time trying to get their connection working. I am not sure whether they ended up getting everything to work because I left to check out the Open Spaces. Earlier in the day, I had met Maggie Pint, a Microsoft software engineer who also maintains the excellent moment.js library, and she told me about her talk going on at 1:00 and her Open Spaces session at 2:30. It was clear after talking with her for just a couple of minutes that she was an expert on her topic, the complexities of date and time (anyone who has spent any time thinking about dates and times across time zones knows how complicated this topic really is). I caught a little bit of her 2:30 open space discussion, but not as much as I would have liked. In retrospect, I really wish I had been able to attend two 1:00 sessions.

My last session for the day was about domain-driven data, given by Bradley Holt. It was a thought-provoking session that related strongly to Eric Evans’ 2003 book Domain-Driven Design, which I partially read some years ago and recall being very influential. After the discussion, I think I need to re-read some parts, because a lot of the concepts are still relevant to today’s industry. I’ll admit that I haven’t had to think much about my data store – I have mostly worked with low-scale websites, so a relational database has always been the de facto choice for me. However, a strong case was made for a model which leverages the strengths of different persistence strategies: key-value stores, document databases, graph databases, relational databases, and others. One of the most interesting parts was the discussion of Aggregate Roots (a DDD concept from the aforementioned book). In the strictest sense, aggregates that are not the root should not also be an aggregate root in another scenario. Consider the following object:

public class Order : IAggregateRoot
{
    public int Id { get; set; }
    public int CompanyId { get; set; }

    public virtual Company Company { get; set; }
}

In this case, the Company object is an aggregate, but not an aggregate root. Therefore, the Company object should not be an aggregate root of its own. One solution proposed by another attendee was to have an “OrderCompany” object with foreign key constraints to the Order and Company tables. This way, Company can still exist as an aggregate root, and so can Order (a quick sketch of that idea is below). These kinds of sessions are my favorite because they really get me thinking about architecture.
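
A minimal version of that “OrderCompany” idea might look like this (the shape and property names are my own guess at the proposal, not code from the session):

public class OrderCompany
{
    public int OrderId { get; set; }
    public int CompanyId { get; set; }

    //Each side remains its own aggregate root; this object only records the association
    public virtual Order Order { get; set; }
    public virtual Company Company { get; set; }
}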