Fix: Visual Studio doesn’t remember last open documents

After installing Visual Studio 2017 a few months back, I noticed that some projects were loading strangely, while others loaded just fine. The two main issues I experienced were:

  • Documents I had open on my previous run of VS wouldn’t load upon running the Visual Studio 2017 application
  • Windows I had arranged in my multi-monitor layout were not loading where I expected them

A quick Stack Overflow search led me to the answer regarding the first: the .suo file had become corrupt. Once I knew that, the trick was finding the .suo file:

  1. From the directory containing your solution file (.sln), open the folder named “.vs”.
  2. In the “.vs” folder, open the folder that has a name matching your solution name.
  3. Inside the solution folder, there may be multiple folders, one for each version of Visual Studio
    1. v14 is for Visual Studio 2015
    2. v15 is for Visual Studio 2017

These folders contain your .suo file. It's hidden by default in Windows, so you'll need to enable "Show hidden files, folders, and drives" in your Folder Options to see it. For instructions (Windows 7, 8, or 10), see the following article: https://www.howtogeek.com/howto/windows-vista/show-hidden-files-and-folders-in-windows-vista/
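
As a concrete example, assuming a solution named MyApp sitting in C:\Projects\MyApp (both names are hypothetical), the Visual Studio 2017 file would live at:

C:\Projects\MyApp\.vs\MyApp\v15\.suo

Deleting (or renaming) that file is safe: Visual Studio simply regenerates it the next time you open the solution, at the cost of losing the per-user state described below.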

I still haven’t found a solution to my second issue (I will definitely write about it if I find one).

What the SUO (Solution User Options) file controls

After solving my problem, I decided to take a look at the responsibilities of the .suo file. Microsoft’s documentation (VS 2015 version – 2017 isn’t available at the time of this writing) isn’t very forthcoming in detailing what exactly the SUO is doing. Based on digging around on the web, it seems that the following are its responsibilities (among others):

  • Remembers last open files
  • Remembers breakpoints
  • Remembers expanded nodes in solution explorer
  • Remembers startup project
  • Remembers last open tool windows and their positions
  • Remembers watch window contents

The file is a binary format and not human-readable, so it's not something you can simply hand-edit the way you can a solution (.sln) or project (.csproj, .vbproj, etc.) file. It should not be added to version control.
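
If you're using Git, the easiest way to keep it (and the rest of the per-user state) out of your repository is to ignore the whole ".vs" folder; these entries, taken from the typical Visual Studio .gitignore, are enough:

# Visual Studio per-user files, including the .suo
.vs/
*.suo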

How to Fix a Credential Validation Issue on Azure WebJob Renewal of a Let’s Encrypt Certificate

A while back, I posted about setting up SSL encryption for free with Azure and Let’s Encrypt: Let’s Encrypt + Azure = Win!

This has been working smoothly for me since I set it up, but I noticed that errors started popping up in the log recently. Here is part of the stack trace:

Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.RenewCertificate ---> Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException: AADSTS70002: Error validating credentials. AADSTS50012: Invalid client secret is provided. Trace ID: 958b11ab-839d-4a8d-97e6-fad1c3df0300 Correlation ID: e3f7c035-8978-4aa2-b01a-5c8fc74661ac Timestamp: 2017-05-31 14:14:26Z ---> System.Net.WebException: The remote server returned an error: (401) Unauthorized. at System.Net.HttpWebRequest.GetResponse() at Microsoft.IdentityModel.Clients.ActiveDirectory.HttpWebRequestWrapper.

It turns out that the API key I had set up for my application registration had expired. I had to create a new key with no expiration and then update my web application's settings with the new client secret. The exact steps I took are listed below:

  1. Login to Azure
  2. Navigate to “App Registrations”
  3. Choose the Registration you need to update
  4. Click the “settings” icon (or “All Settings” button)
  5. Choose “Keys” under API Access
  6. Type a description into the new row, choose “Never” under the duration drop-down, and then hit “Save” above.
  7. Once saved, copy the value (it won’t be visible again if you don’t copy it now)
  8. (Optional) delete your old key
  9. Navigate to the Azure App Service that has the web job that registers your SSL certificate
  10. Choose “Application Settings” from the menu
  11. Scroll down to the setting titled something like “letsencrypt:ClientSecret” (assuming you did the setup as in the article linked at the top) and paste the value you copied into the second text box (see the example after these steps)
  12. Click “Save” above
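
For reference, the application setting being updated looks something like the line below; the setting name assumes the setup from the article linked at the top, and the value is just a placeholder for the key you copied in step 7:

letsencrypt:ClientSecret = <new key value from step 7>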

Once you’re done, the web job should work the next time it runs. For another explanation with some pictures of the process, check out this blog post here: Let’s Encrypt on Azure Web Apps – Key Expiration Issue.

Converting a Lead Acid Battery-Powered Lawn Mower to Use Lithium Batteries

About 7 years ago, I was in the market for a new lawn mower. Looking at all the options at the time, I decided to go with an electric 24v, 20 amp-hour lawn mower sold under a brand called Earthwise. Here is the lawn mower in all its glory, model 60120:

Earthwise 60120 electric lawn mower

Credit: Amazon.com

I loved this mower from the get-go. It was extremely quiet, could mow my entire 1/4-acre lawn on a single charge, and didn’t require gas, oil, spark plugs, etc. The only maintenance was charging the battery and sharpening the blade.

That entire first season was great, but the second season it wouldn’t hold a charge for nearly as long; by fall I was having to charge it twice to finish my lawn. The third season was even worse: it wouldn’t hold a charge for more than a few minutes.

Opening the battery compartment on the lawn mower. Two 12-volt batteries are wired in series to produce the 24 volts that power the mower.

I knew the batteries needed to be replaced, but I had no idea how much they would cost. I think I paid something like $150 for a replacement set, which is a pretty steep price. A few years later, those batteries were dead too. I gave up on it and bought a cheap used gas mower last year, but I hated using it. The pull starter was finicky, it would occasionally expel clouds of black smoke, and I would forget to buy gas for it from time to time.

I decided to look around and see if other people had found solutions, and sure enough they had. With the proliferation of lithium-ion batteries, it isn’t hard to find the batteries needed or to perform the upgrade.

What I Needed to Do

In a nutshell, the task was simple. I had to do the following:

  • Buy lithium batteries to replace the lead-acid batteries
  • Cut the ends off of the black and white wires coming from the top of the battery case.
  • Solder new connectors to the black and white wires (whatever connectors matched the batteries I would buy)

It’s really that simple – just a few tasks and I would be on my way. A little research was required to figure out what batteries to buy, however.

Lithium Polymer (aka Li-Po, LiPo, or Li-Poly) Batteries

Lithium polymer is a bit of a misnomer: lithium polymer batteries are technically just lithium-ion batteries in a polymer casing (check out this excellent article for a good explanation of the difference between lithium-ion and lithium-polymer: Lithium Polymer vs Lithium-Ion batteries: What’s the deal?). Regardless, they came highly recommended as the battery of choice for this project. These batteries are used all over the hobby world today, with drones leading the way, and lithium polymer batteries also power many computers and cellphones.

Lithium Polymer batteries have a few important pieces of information written on them:

  • Voltage: You need a voltage that closely matches the mower’s. Since my lawn mower runs on 24 volts, a 22.2v li-po battery is the best fit. Lithium polymer cells have a “nominal” voltage of 3.7v, and pack voltages are just multiples of 3.7v because multiple cells are run together to form a single battery, so a 22.2v battery is really made up of 6 3.7v cells. “Nominal” means the mid-range voltage: the cells sit at 4.2v when fully charged and 3.2v when fully discharged, which means a 22.2v battery will actually output somewhere between 19.2v and 25.2v over the course of its run
  • Number of cells: Batteries will often have something like “6S” or “3S” printed on them. This corresponds to the number of cells in the battery. 6S = 6 cells = 22.2v. 3S = 3 cells = 11.1v.
  • Capacity/Runtime/Amp hours: Capacity is measured in mAh, aka milliamp hours, and determines runtime. A battery rated at 5000 mAh has a capacity of 5 amp-hours. Since my mower originally had 20 amp-hours, I wanted my batteries to come close to that to get a similar amount of runtime.
  • “C” Rating/Capacity Rating/Discharge rating: Batteries also list a C rating, which is used to determine the maximum load a battery can safely sustain. 1C equals the capacity of the battery, so if a battery has 5 amp-hours (5000mAh), 1C = 5 amps, and a 40C rating means a maximum draw of 200 amps. (A quick worked example follows this list.)
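
To make the numbers concrete, here is the arithmetic for a hypothetical 6S, 4500 mAh, 45C pack (the same math applies to whatever pack you’re considering):

  • Nominal voltage: 6 cells x 3.7v = 22.2v (ranging from 6 x 3.2v = 19.2v empty to 6 x 4.2v = 25.2v full)
  • Capacity: 4500 mAh = 4.5 amp-hours per pack
  • Maximum safe draw: 45C x 4.5 amp-hours = 202.5 amps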

All of this is explained in much greater detail by this excellent article: A Guide to Understanding LiPo Batteries

Based on all this information, I knew I needed 22.2v batteries, and I wanted to get somewhere around 20 amp hours. I read from another resource that 20C was sufficient for others who did this project, so I figured I could do that or above. Looking online, I found the batteries to be fairly expensive. I settled on 2 pairs of these batteries (sold as 2 each): https://www.amazon.com/gp/product/B01AW7CKLW/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1 (22.2v, 4500mAh, 6S, 45C, Deans connector). Note that it says they come with XT-60 connectors, but the picture shows Deans connectors, which is what I received.

Deans Connectors

Deans connectors are apparently very common in the hobby world. I bought a pack of male plugs and a few splitters.

Soldering the ends was a little bit tricky as the connectors from the battery were fairly thick. I eventually got it right though, and the connections work fine.

Other Considerations

Charging the Batteries

You also need a charger for these batteries. Unfortunately, you have to charge them one at a time, so if you want 4 batteries like I have, you either want a multi-battery charger or you have to be a little patient.

Knowing when to Charge

It’s a good idea to get low-voltage indicators: https://www.amazon.com/gp/product/B003Y6E6IE/ref=od_aui_detailpages01?ie=UTF8&psc=1. If you put these on your batteries when you use them, they make a rather annoying sound when the voltage drops to the low threshold. This is important because your mower’s meter isn’t going to tell you when your charge is low. If you push a li-po battery too much, you can cause damage to the battery or it could explode. These things are loud enough that I can hear them while running the lawn mower.

Safely Storing and Transporting

Li-Pos are very flammable, and li-po fires are difficult to put out. It’s advisable to buy a (relatively cheap) fireproof bag for storage and charging, and to use your charger’s “Storage” setting to bring the batteries down to a storage charge when you aren’t going to use them for a week or more. You should store them at room temperature, and it’s advised that you be present while charging due to the fire hazard. The fireproof bag I purchased is here: https://www.amazon.com/gp/product/B01H4QCZ4G/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

Conclusion

The mower now holds a charge that is easily long enough to mow my entire lawn again. Li-Po batteries are supposed to last for 200-300 charges in good conditions, so I’m hoping to get several years out of this setup. Charging is a bit of a pain, but I tend to mow on the weekends, so I’m usually around long enough to charge all 4 of them (It takes a couple of hours to fully charge each battery).

The mower has enough power to mow at most of the height settings, but it struggles to mow at the lowest levels. This is fairly consistent with how the lead acid batteries performed as well – there just isn’t enough output to chew up thick grass that is significantly lower than the current height.

It turned out that doing this conversion was fairly easy, but not particularly cheap. All told, I bought the following:

  • 2 sets of 2 x 22.2v 4500mAh, 45C batteries – $110 each set ($220 total)
  • 2 battery low-voltage indicators – $5 each ($10 total)
  • Li-Po battery charger/balancer – $55
  • Fireproof bag (holds 4 batteries, came with 2 more low-voltage indicators) – $15

Together, that’s $300, which could buy a decent gas mower. However, I’m a nerd so I enjoyed the project.


Configuring a Fraud Detection Whitelist on Office 365 / Exchange 365

I’ve been set up with Office 365 for around a year, and I’m still discovering little things to tweak and optimize. One such thing I ran across today was a little message in some emails that were generated by an on-premises web server:

This sender failed our fraud detection checks and may not be who they appear to be. Learn about spoofing

While the link provided by Microsoft about spoofing describes spoofing in detail, it doesn’t say anything about what to do when you know a message isn’t fraudulent and want to prevent Exchange from flagging it. After doing a little digging (I started by looking for some kind of a whitelist or a way to whitelist certain IP addresses on Exchange/Office 365), I came across a very helpful article: This Sender Failed Our Fraud Detection Checks and May Not Be Who They Appear to Be.

In a nutshell, the problem is that the email originates from an IP address (our web server’s address) that isn’t authorized by my domain’s SPF (Sender Policy Framework) record. When a message arrives, the receiving server looks up the SPF record for the sending domain and checks that the message came from an IP address that domain has authorized, which helps prevent fraudulent sending. After all, it’s pretty damn easy to spoof an email address using software.

SPF filters are added to your DNS records, and are pretty easy to update. To that end, I logged into my DNS provider and took a look at the records for my domain. There, I found a TXT record that was setup when I initially configured Office 365. This record had the following value:

v=spf1 include:spf.protection.outlook.com -all

In order to add my web server’s address to this record and thus resolve my issue, the line simply needed to be modified as such:

v=spf1 ip4:xxx.xxx.xxx.xxx include:spf.protection.outlook.com -all

xxx.xxx.xxx.xxx is, of course, the IP address you want to “whitelist.”
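
If you want to confirm what your DNS provider is actually publishing after the change, one quick way is to query the TXT record yourself from a command prompt (contoso.com below is just a stand-in for your domain):

nslookup -type=TXT contoso.com

The response should include the updated v=spf1 string with your server’s IP address.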

Once the old DNS record expires (I have a TTL of 1 hour on this record), the new configuration should take effect and your messages will no longer be destined for your Junk Email folder.

EntityFramework – Grouping by Date Ranges

If you’ve ever created an outstanding balance report or other report that deals with aggregating data into date ranges, you’ll know that it isn’t immediately obvious how to structure your query, whether using SQL or LINQ (at least, it wasn’t to me).

My initial thought was to run multiple queries (one for each time range) and munge the results together. However, an elegant solution is to use SQL’s CASE expression to group date ranges together.

Let’s say you wanted a report that summed the amount of unpaid invoices in 20 day groupings (0-19 days past due, 20-39 past due, 40+). You could write something like this:

SELECT DaysSinceDueRange, SUM(Amount) AS Amount
FROM (SELECT CASE WHEN DATEDIFF(DAY, DueDate, GETDATE()) < 20 THEN 0
                  WHEN DATEDIFF(DAY, DueDate, GETDATE()) BETWEEN 20 AND 39 THEN 20
                  WHEN DATEDIFF(DAY, DueDate, GETDATE()) > 39 THEN 40
             END AS DaysSinceDueRange,
             Amount
      FROM Invoices
      WHERE Unpaid = 1) inv
GROUP BY DaysSinceDueRange

This is really elegant, but then the question becomes how to do this with an ORM like EntityFramework. There are a couple of tricks required here:

  1. To do the date comparisons, EntityFramework requires the use of the System.Data.Entity.DbFunctions.DiffDays method (in EF 6 – it used to live in System.Data.Objects.EntityFunctions). If you try to do something like (DateTime.Now - invoice.DueDate).TotalDays, you’ll get an exception “DbArithmeticExpression arguments must have a numeric common type” because the subtraction operator is not defined for dates in SQL Server.
  2. To do CASE / WHEN / THEN / END in EntityFramework, you have to make use of a lot of ternary operators. It can be kind of ugly, but if you write your code well enough, it should be fairly readable (or at least as readable as the SQL expression).

Here is an example of the SQL above translated into LINQ:

Context.Set<Invoice>()
       .Where(inv => inv.Unpaid)
       .Select(inv => new
       {
           //DbFunctions lives in System.Data.Entity (EF 6)
           DaysSinceDueRange = DbFunctions.DiffDays(inv.DueDate, DateTime.Today) < 20 ? 0 :
                               DbFunctions.DiffDays(inv.DueDate, DateTime.Today) >= 20 && 
                                   DbFunctions.DiffDays(inv.DueDate, DateTime.Today) < 40 ? 20 : 
                               40,
           Amount = inv.Amount
       }).GroupBy(inv => inv.DaysSinceDueRange)
       .Select(g => new
       {
           DaysSinceDueRange = g.Key,
           Amount = g.Sum(inv => inv.Amount)
       });

Of course, you can get more complicated in a hurry, but I think this is a pretty elegant way to handle grouping data by date ranges.

Project Fi is the way to go if you are a low cellular data user

After years of being an AT&T mobile customer, dating back to the Cingular days, I finally made the jump to Google’s Project Fi last December. All in all, the service has been very good, and the savings have been ridiculous. AT&T and Verizon have recently rolled out unlimited data plans, so the pricing is a little different from the plan I was on, but it’s not too dissimilar from what I had. Note that I do not recommend Project Fi if you’re a really heavy cellular data user (> 6GB total) because they charge $10/GB; if you’re using a lot of data, that will add up fast. However, the problem with most plans is they don’t give you anything back if you don’t use all of your data (maybe you get rollover data, but I’d rather have money). Project Fi pays you back for what you don’t use.

With AT&T, my wife, my sister, and I had a family share plan with 6GB of data. The total for this plan was right around $210/mo, and that was with a corporate discount applied. All of us had new-ish smartphones (my wife and sister had iPhones, I had an Android), and we were coming up on the end of our 2-year contracts.

At That Conference last year, someone told me about Project Fi and how little it cost. I started looking into it, and when my phone (an LG G3) turned itself into an unbootable brick one day in September, I decided to buy a Nexus 6P as a replacement. Not only was the phone reasonably priced, it was also one of the very limited selection of phones that work on Project Fi. It took a little convincing to get my wife to move from her iPhone to Android, but when the Pixel was announced, she agreed to make the move.

Cell Coverage and Quality

A few months in, I can tell you the service, at least where I live in Madison, Wisconsin, has been very good. I haven’t had many instances where I couldn’t get a signal. From the Project Fi FAQ:

Project Fi has partnered with Sprint, T-Mobile, and U.S. Cellular, three of the leading carriers in the US, to provide our service.

They also provide a link to a coverage map: https://fi.google.com/coverage

Project Fi also tries to use Wi-Fi calling when cell quality is low. I have found this to be a bit of a mixed bag – I sometimes don’t get a dial tone, or the phone doesn’t indicate that a call is going out until, suddenly, someone picks up.

The only places where I’ve noticed signal quality problems so far have been inside airports. In particular, it was difficult to get a signal at O’Hare. Other people I was traveling with who had Verizon got a better signal.

I have also noticed that sometimes SMS messages won’t come through unless I enable cellular data (and yes, I have verified this even when the messages are not MMS). My wife hasn’t had the same problem on her Pixel, so mine could be a hardware issue or something specifically related to the Nexus 6P.

Costs

Getting back to the costs, our bill comes in at around $45/month. For two people. Previously, it was costing about $140/month for two people. That’s almost $100/mo we’re saving. Because Fi reimburses you for unused data, it incentivizes us to use less than the 2GB we pay for. I even have a little widget on my phone that shows how much data I’ve used, and it really encourages me to think about how I’m using data.

Here is a breakdown of the charges from my last bill:

Last month’s usage (Feb 2 – Mar 2)
  Unused data: credit for 1.666 GB at $10/GB    -$16.65

Next month’s charges (Mar 2 – Apr 2)
  Fi Basics: 2 people, $20 + $15/member          $35.00
  Prepaid data: 2 GB at $10/GB                   $20.00
  Taxes & regulatory fees                         $6.12

Total: $44.47

International Calling

One of the other great benefits of Project Fi is the international calling aspect of the plan. Data is still $10/GB in over 135 countries. From their FAQ:

Project Fi offers high speed data in over 135+ countries and destinations for the same $10/GB you pay in the U.S. For a complete breakdown of specific countries please check our International Rates.

Further:

Unlimited international texts are included in your plan. If you’re using cell coverage, calls cost 20¢ per minute. If you’re calling over Wi-Fi, per-minute costs vary based on which country you’re calling and you’re charged only for outbound calls. Please check our international rates for more information.

I haven’t had a chance to try it out, but I love this part of the plan. When I traveled to Belize last year, we paid $20 for a SIM card, calls there are generally very expensive, and data is incredibly expensive. Had I had Project Fi at the time, those charges would have been very minimal (coverage is another issue altogether, but at least when you have coverage, usage comes at a reasonable rate).

Summary

So, in summary, if you’re a relatively low cellular data user and don’t mind having Google phones, this plan is a great value. I’m looking at saving nearly $1200 this year because of it. I can think of a lot better things to do with my money than spend it on cellular service.

EntityFramework Performance and IEnumerable vs IQueryable

Working in the .Net world, you get pretty used to dealing with IEnumerable collections. However, you have to be aware of performance issues that can arise when using them with EntityFramework. Sometimes I forget about IQueryable because LINQ to Entities hides much of the difference between retrieving objects from a database and working with an in-memory collection, and IQueryable is specific to querying a database.

When using the Repository pattern, one of the things I love to do is add a flexible “Find” method to the repository. Below is an example:

public partial class Order
{
    public int ID { get; set; }
    public string CustomerOrderNumber { get; set; }
    public DateTime? ShipDate { get; set; }
    public int OrderTypeID { get; set; }
}

public class FindOrdersRequest
{
    public IEnumerable<int> OrderIDs { get; set; }
    public IEnumerable<string> CustomerOrderNumbers { get; set; }
    public IEnumerable<int> OrderTypeIDs { get; set; }
    public bool? HasShipDate { get; set; }
    public DateTime? ShipDateBefore { get; set; }
    public DateTime? ShipDateAfter { get; set; }
    
    public FindOrdersRequest()
    {
        OrderIDs = new List<int>();
        CustomerOrderNumbers = new List<string>();
        OrderTypeIDs = new List<int>();
    }
}

//I would normally implement an interface here, but for the sake of brevity am excluding from this example
public class OrderRepository
{
    private IDbContext context;
    public OrderRepository(IDbContext context)
    {
        this.context = context;
    }

    public IEnumerable<Order> Find(FindOrdersRequest request)
    {
        //NOTE: this AsEnumerable() call is the problem highlighted below; it pulls the whole table into memory
        IEnumerable<Order> orders = context.Set<Order>().AsEnumerable();
        if(request.OrderIDs.Any())
        {
            orders = orders.Where(o => request.OrderIDs.Contains(o.ID));
        }
        if(request.CustomerOrderNumbers.Any())
        {
            orders = orders.Where(o => request.CustomerOrderNumbers.Any(x => o.CustomerOrderNumber.Equals(x)));
        }
        if(request.OrderTypeIDs.Any())
        {
            orders = orders.Where(o => request.OrderTypeIDs.Contains(o.OrderTypeID));
        } 
        if(request.ShipDateAfter.HasValue)
        {
            orders = orders.Where(o => o.ShipDate.HasValue && o.ShipDate >= request.ShipDateAfter);
        }
        if (request.ShipDateBefore.HasValue)
        {
            orders = orders.Where(o => o.ShipDate.HasValue && o.ShipDate <= request.ShipDateBefore);
        }
        if(request.HasShipDate.HasValue)
        {
            orders = orders.Where(o => o.ShipDate.HasValue == request.HasShipDate.Value);
        }

        return orders;
    }
}

public class OrderService
{
    private OrderRepository orderRepository;
    public OrderService(OrderRepository orderRepository)
    {
        this.orderRepository = orderRepository;
    }

    public IEnumerable<Order> GetUnshippedOrders()
    {
        return orderRepository.Find(new FindOrdersRequest()
        {
            HasShipDate = false
        }).ToList();
    }
}

The major problem with this code is highlighted in the example above – namely, the AsEnumerable() call on context.Set<Order>() will enumerate every row from the database in full. The SQL generated will be equivalent to a SELECT *, and will probably look something like:

SELECT [Extent1].[ID] AS [ID], 
    [Extent1].[CustomerOrderNumber] AS [CustomerOrderNumber], 
    [Extent1].[ShipDate] AS [ShipDate],
    [Extent1].[OrderTypeID] AS [OrderTypeID]
    FROM [dbo].[Orders] AS [Extent1]

Now, you might not notice if you have 10 rows, but if you have 1,000,000, you’ll notice as your application burns to the ground and consumes all the memory on whatever server it’s running on.

So, an easy fix is to change that one line to IQueryable / AsQueryable() like so:

IQueryable<Order> orders = context.Set<Order>().AsQueryable();

Now we get the benefits of deferred execution until ToList is called on the results of the Find method from OrderRepository. The SQL generated will now be something like:

SELECT [Extent1].[ID] AS [ID], 
    [Extent1].[CustomerOrderNumber] AS [CustomerOrderNumber], 
    [Extent1].[ShipDate] AS [ShipDate],
    [Extent1].[OrderTypeID] AS [OrderTypeID]
    FROM [dbo].[Orders] AS [Extent1]
    WHERE [Extent1].[ShipDate] IS NULL

This is a huge improvement already, but it can be much better than this. In the OrderRepository class, if we also make the return type IQueryable, we can then further query the database before pulling the results into memory.

public IQueryable<Order> Find(FindOrdersRequest request)
{
    //the rest of the method remains the same
}

This distinction is important, and I will provide an example.

After I add this “Find” functionality to my repositories, I tend to build reports using those methods, which frequently utilize .GroupBy() after filtering. If we were to leave the Find method as returning an IEnumerable<Order> collection, we would find that the SQL generated would not be what we wanted.

For example, let’s say I now wanted a report that showed the number of shipped and unshipped orders by OrderTypeID. I would add a method to the OrderService as such:

//assume that an object ReportItem exists with properties as defined below
public IEnumerable<ReportItem> GetUnshippedOrdersByTypeReport()
{
    var report = new List<ReportItem>();
    var results = orderRepository.Find(new FindOrdersRequest() { HasShipDate = false })
                                 .GroupBy(order => new 
                                 {
                                     OrderTypeID = order.OrderTypeID
                                 })
                                 .Select(g => new
                                 {
                                     OrderTypeID = g.Key.OrderTypeID,
                                     ShippedOrderCount = g.Count(x => x.ShipDate.HasValue),
                                     UnshippedOrderCount = g.Count(x => !x.ShipDate.HasValue)   
                                 });
    foreach(var result in results)
    {
        report.Add(new ReportItem()
        {
            OrderTypeID = result.OrderTypeID,
            ShippedOrderCount = result.ShippedOrderCount,
            UnshippedOrderCount = result.UnshippedOrderCount
        });
    }
   
    return report;
}

With this code, the benefits of using IQueryable in the repository are clear over IEnumerable.

If we leave the repository with IEnumerable, the SQL generated will be done in two phases:

  1. The filtering of the “FindOrdersRequest” will be executed as the SQL statement above and the results will be stored into a temporary IEnumerable collection
  2. The Group By operation will operate on this temporary collection. More trips to the database will be taken if any navigation properties are referenced in the GroupBy (there are none in this example)

If we change the repository to use IQueryable, the SQL generated will be done in a single, neat statement. It will filter and perform the group by at once, resulting in much better performance. Another performance benefit is that we are projecting specific columns and not populating an entire Order object with every field. For this trivial example, it doesn’t make much of a difference, but if you’re dealing with tables/objects that have many columns/properties, you will notice. If you’re deploying to a cloud-based environment, you know that compute time and efficiency matter a lot, so following best practices for performance will help you out a lot in that respect.
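
As an aside, if you want to verify which flavor of SQL you’re actually getting in cases like this, EF 6 has a built-in hook for logging the generated SQL. A minimal sketch, assuming you have access to the DbContext instance (writing to the debug output is just one option):

//EF 6: log every SQL statement the context generates to the debug output
context.Database.Log = sql => System.Diagnostics.Debug.Write(sql);

With that in place, running the report method will show whether the filtering and grouping happened in a single SELECT or were pulled into memory first.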

 

How to Add and Manage Outlook Rules/Filters for Office 365 Shared Mailboxes

It isn’t obvious, but you can set up and manage rules for shared mailboxes in Office 365 just as you do for users’ mailboxes. It isn’t obvious because you can’t administer these rules through the desktop client (or any other client) like you can with user mailboxes, and the settings for managing these rules don’t even appear in the Office 365 or Exchange administration panels. Here are the steps to take:

  • Login to Office 365 with an account that has administrative access to the Shared Mailbox
  • Enter the following URL into your browser to get to the shared mailbox options page: https://outlook.office365.com/ecp/<email address>, where <email address> is the email address of the shared mailbox with the rules you want to manage.
    • For example, if your email address was info@contoso.com, you would enter the address https://outlook.office365.com/ecp/info@contoso.com
  • Click the “Organize email” section on the left menu

This method also works for editing user mailbox rules, provided you have access.

Note: As of this writing, Internet Explorer is the only browser I have tried that successfully adds a rule involving a sent-from or sent-to criterion. Chrome and Firefox both give a CORS error because selecting people tries to open the contacts app for the account, and that application is served from a different domain. The message I receive from Firefox is:

08:59:07.114 Load denied by X-Frame-Options: https://outlook.office.com/owa/#viewmodel=OwaOptionRichPeoplePickerViewModelFactory does not permit cross-origin framing.

Managing Global Rules

If you simply want to edit global rules that affect all mail flowing to/from your organization, you can follow the steps below:

Getting to the Exchange Admin Center

  • Login to Office 365 with an account that has administrative access to the Shared Mailbox
  • Open the Admin Center by clicking the “Admin” button
  • When the Admin Center opens, click the “Admin Centers” link on the left-side menu and choose “Exchange”

Managing Rules

  • From the Exchange Admin Center, there is a “Rules” link under the mail flow section. Remember, this area only allows you to perform a limited set of actions that does NOT include moving a message to a specific folder, since these rules apply at a global level. The actions you can perform here are:
    • Forward the message for approval…
    • Redirect the message to…
    • Block the message…
    • Add recipients…
    • Apply a disclaimer to the message…
    • Modify the message properties…
    • Modify the message security…
    • Prepend the subject of the message with…
    • Generate incident report and send it to…
    • Notify the recipient with a message…

Windows Server Backup Trials and Tribulations

I know I’m not the first person to say this (2 people advised me of this today), but don’t use Windows Server Backup. Get a real backup solution and save yourself some serious headaches. I know I will be doing just that after my latest fun with WSB. *Caveat – my experience is with W2K8r2 servers, so perhaps it’s been improved in newer iterations.

I’ve been managing server hardware for a couple of small businesses for about 10 years now, and I’ve always just used Windows Server Backup (in conjunction with Azure backup, as of a couple of years ago) to backup said servers. Today was one of those moments where you have fleeting thoughts of “Am I going to get fired? Did I just lose all of the data for this organization?” I’d like to believe that I have better safeguards in place than to allow that to happen (and it turns out, I was able to get it working again), but man does it scare the crap out of you when you can’t get things working.

Considering my server is in a RAID 5 configuration, you might think “What is the problem? Swap out the failed drive and move on.” That’s what I normally do when I encounter a degraded virtual disk, but this time it didn’t work because a second disk failed during the rebuild process (different from the disk that caused the degradation in the first place). The system was fine until a several-hours-long power outage depleted my battery backup’s power, causing the system to go down in the dead of night. As a side effect, my virtual disk was degraded, and two of the physical disks comprising the virtual drive reported problems. The virtual disk kept failing to rebuild, and pulling out the drive that was keeping it from rebuilding would have rendered the machine useless, since it wasn’t the hot spare.

I woke up extra early the next morning to come in and restore the backup image that was taken a few hours before that. The goal was to replace all of the failed physical disks, delete the virtual disk and create a new one with the same parameters as the old, and then restore the image – all before business started that day.

I guess I forgot how much of a PITA Windows Server Backup was to use. Here are the instructions I left myself from the last time I had a multi-disk failure:

  • A Windows System Image backup must be performed before attempting to restore.
    • Verify this before you nuke your virtual disk
  • Since the server has a Dell PERC s300 RAID controller, Windows needs the appropriate drivers to work with the RAID controller when restoring the image. Go to the Dell Support website and enter the Service Tag for the server.
    • Find the Windows Server 2008 R2 RAID / PERC s300 driver in the list of available drivers.
    • Download the “Hard Drive” format file (an EXE) and run it on the computer where you downloaded it; running the file extracts the drivers to a folder of your choosing
    • Place the folder of drivers from the previous step onto an external drive (or burn to CD)
  • Check the settings of your RAID configuration – verify size and caching options so you can use them for your new image
  • Power off the server
  • Swap out any bad hardware for new drives
  • Restart the server and enter the RAID configuration menu (Ctrl + R).
    • Find the virtual disk and delete it (This can be also be done in OpenManage Server Administrator prior to rebooting).
    • Initialize any new physical disks for use in the virtual disk.
    • Create a new virtual disk, matching the configuration to what it was previously (usually all available space).
    • Make sure to swap the virtual disk into virtual disk slot 1 (the only bootable one), if you have multiple virtual disks in your array.
  • Install the Windows Server OS disc into the DVD tray
    • Continue to boot
  • When Windows Setup loads, choose the language and hit the next arrow.
  • Choose the “Repair an installation” link.
  • Click the “Load Drivers” button
  • Plug the external drive (or CD) containing the downloaded drivers into the server and browse to it when Windows prompts you for the driver location
  • Plug the external drive with the backup into the server
  • Choose the Restore from System image option and hit next
  • Windows Server Backup should now detect any backups from the external drive
    • Note, this can take a reallllly long time (> 30 minutes)
  • Choose next and Finish. The image should restore after a few hours.

One would think that with these detailed instructions, it should be fairly easy to restore the image. The answer is no. For one thing, Windows Recovery seemed to really struggle to recognize my external backup drive. This is especially disconcerting and leads you down some incorrect paths (starting to think the data is corrupted and trying to fix problems with the disk). Eventually, I got it to work through trial and error, but it took far too long and interrupted business operations far longer than it should have. Some problems I encountered:

  • The “Load Drivers” step is essential if you are restoring your image over an existing drive (and not starting from scratch). It may also be essential if you are starting from scratch – it never worked for me without loading those drivers, so I think it’s a good idea anyway (assuming you’re working with a RAID controller)
  • Feedback with the System Image Recovery is extremely poor. You have no idea what is going on most of the time. You are simply stuck with progress bars that never end. You really just have to wait and hope that it completes.
  • My first attempt at creating the virtual disk resulted in a size just slightly smaller than “all available space”, so during the image restore process I received the helpful error “A data disk is currently set as active in BIOS. Set some other disk as active or use the DiskPart utility to clean the data disk, and then retry the restore operation. (0x80042406).” I opened up the DiskPart utility and cleaned the data disk, but as I suspected, the problem was really something else: the new virtual disk’s size has to match (or exceed) the size of the cloned drive
  • At one point, I loaded Windows as a fresh install and tried to restore from Windows Server Backup within the OS. The only problem with that is that you can’t do a bare metal restore – you can only restore files and drives. While this was better than nothing, I didn’t like the idea of trying to figure out how to reconfigure all the software on that machine. I’m glad I stuck with the image restore.
    • Windows Server Backup has always been incredibly slow, as well. For the most part, I use the command line when running backups with it, but the user interface is ungodly slow. Just opening it and getting to the initial view is really laggy.
  • It seemed like it mattered when I plugged my external drive in. If I had the drive plugged in when I started Windows Recovery, the image restore never found it. But when I plugged it in after loading the drivers, it seemed to be okay

In general, unresponsive software is a huge pet peeve of mine. Let me know that something is going on. I don’t know whether something is hanging or just takes forever. Also, no software should be this picky – especially something involved in a potentially mission critical application.

If you’ve encountered problems trying to restore your Win2K8r2 server and you’re using RAID-5, these steps might help. As I discovered today, the internet is full of reports of problems restoring from WSB (most posts are older because Server 2008 R2 is pretty old now, in OS years).

This episode did help me reflect on some problems though – namely, that I need to have more resilient processes in place for problems like this. For example, what if the motherboard failed on this server? What do I do then? Have Dell overnight a motherboard for this? Redundancy is non-existent here. Furthermore, this is an area where you need to practice – I’m sure if I had done restores more than 2-3 times in my entire life, it wouldn’t have been so bad. Developing some kind of a failure scenario like the infamous “Chaos Monkey” would help make things much more resilient. At the very least, better documentation on the services and applications installed on the server now will help in the event of a catastrophic loss or even just a migration.

As a developer, though, the correct answer is really to just virtualize all the things so I don’t have to worry about disk failures anymore 🙂

Asp.Net Core appsettings tips

I’ve recently had the opportunity to work on a new project where I was able to use Asp.Net Core for the first time. Well, not completely for the first time: I’ve contributed to an open source project that has been using .Net Core since back when it was called DNX or ASP.NET 5. The work I did there, though, was focused on writing code for the application, not configuring the infrastructure.

A lot has changed, but the changes are largely for the better. There are a few things that tripped me up, so I figured I’d write about them here.

AppSettings have gone JSON

This in and of itself isn’t much of a revelation, but I, for one, am glad to have JSON configuration over XML. In the Startup.cs file, appsettings are configured by default like this:

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

Just like before, when we had Web.config, Web.Release.config, and Web.{EnvironmentName}.config transforms, any environment-specific configuration is applied on top of the values defined in the appsettings.json file. So, if you have an appsettings.json file that looks like:

{
    "MyVariable1": "value1",
    "MyVariable2": "value2"
}

and then you define a file appsettings.production.json that looks like:

{
    "MyVariable1": "productionValue"
}

The production file’s value will be used for MyVariable1 when the application is running in a production environment (the environment name comes from the ASPNETCORE_ENVIRONMENT environment variable), as expected.

Accessing appsettings

The easiest way to access a value from your appsettings file is to use Configuration.GetValue:

Configuration.GetValue("MyVariable1", "");

The above will retrieve the value for MyVariable1, or an empty string if there is no key found for MyVariable1. The nice thing is you don’t get an exception if a key isn’t found, but this could be an issue if you were expecting a key and get the default value instead.
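
As a side note, GetValue is generic, so you can also pull typed values with a default (the MaxRetries key here is purely hypothetical):

int maxRetries = Configuration.GetValue<int>("MaxRetries", 3);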

If your appsettings file has nested objects like this:

{
    "Logging": {
        "IncludeScopes": false,
        "LogLevel": {
            "Default": "Debug",
            "System": "Information",
            "Microsoft": "Information"
        }
    }
}

you can retrieve nested values by using a colon-delimited key, e.g. Configuration.GetValue<string>("Logging:LogLevel:Default");

Personally, I don’t like to use magic strings – I prefer to use a strongly typed configuration.

Strongly Typed appsettings

Rick Strahl has a very good article about strongly typed appsettings, but I will cover the basics. In a nutshell, there are two steps to make this work:

  1. Create a class that has all of the corresponding properties of your appsettings (or just a subsection of your appsettings, as I will show below)
  2. Wire up your class by calling the services.Configure<T> method in the ConfigureServices method of your Startup.cs class

Let’s use the following appsettings.json file as an example:

{
    "MySettings" : {
        "AdminEmail" : "admin@email.com",
        "ErrorPath" : "/Home/Error"
    }
}

All we need to complete step 1 is a class whose properties correspond to these settings. Here is an example:

public class MySettings
{
    public string AdminEmail { get; set; }
    public string ErrorPath { get; set; }
}

Now, in our Startup.cs class, we can add the following to our ConfigureServices method:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.Configure<MySettings>(Configuration.GetSection($"{nameof(MySettings)}"));
}

That’s it. Now we can inject these settings into our MVC/Web API controller constructors; they arrive wrapped in IOptions<MySettings>, which the framework resolves for us. A short example follows.
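
To make that concrete, here is a minimal sketch of a controller consuming the settings (HomeController and the Contact action are just placeholders for this example):

using Microsoft.Extensions.Options;

public class HomeController : Controller
{
    private readonly MySettings settings;

    public HomeController(IOptions<MySettings> options)
    {
        //the framework supplies IOptions<MySettings> based on the Configure<MySettings> call above
        settings = options.Value;
    }

    public IActionResult Contact()
    {
        //use the bound settings like any other object
        ViewData["AdminEmail"] = settings.AdminEmail;
        return View();
    }
}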

Note that in this example we called Configuration.GetSection and gave it the name of our section/class – if you only listed the keys AdminEmail and ErrorPath at the root of the appsettings file (without any nested objects), you could have done the same by calling just services.Configure<MySettings>(Configuration);

Using appsettings in your Startup.cs class

One gotcha that had me stumped for a little while was trying to use some of my appsettings values to provide configuration inside my Startup.cs class itself. The trick here is the Bind method on Configuration. Here is a good example of what I mean: a lot of tutorials and examples will show configuring exception handling as:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    app.UseExceptionHandler("/Home/Error");
}

I like to make that route configurable in my appsettings, so here is how to do that:

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    var mySettings = new MySettings();
    Configuration.GetSection($"{nameof(MySettings)}").Bind(mySettings);

    app.UseExceptionHandler(mySettings.ErrorPath);
}