Connecting to Synology DiskStation from Windows 10

I’ve owned a small Synology DiskStation for a few years and really love its features and capabilities, especially considering its cost. One of the primary roles of my DiskStation is to back up my home computers. After recently purchasing a Surface Pro 4 and applying the Creators Update, I was having trouble connecting to my DiskStation, which runs DSM 6.1. After reading quite a few posts online about different problems, it turned out the solution I needed was really quite basic.

What was most curious about this problem was that I could not see my DiskStation under “Network” in Windows Explorer, and I received error code 53 (“System error 53 has occurred. The network path was not found”) when I tried to map the network drive from the command prompt like so:

net use T: \\DiskStation

Running nbtstat -c and net view from the command line both listed my DiskStation with the UNC name I was expecting, and nbtstat also showed the correct IP address (I have mine configured as static).

First, I made sure the SMB settings on the DiskStation were set to allow from SMB 1.0 to SMB 3.0 (DiskStation Control Panel -> File Services -> SMB -> Advanced Settings -> Maximum/Minimum SMB Protocol Settings).

Then, in Windows, if I opened Explorer and navigated to the IP address or the UNC share name (\\DiskStation, for example), it would prompt me for a password. This was the primary point of failure for me earlier – I had forgotten that when logging into another server, whether it’s a Synology DiskStation or a Windows Server, you have to provide the server name AND the account name (or domain name\account name, in the event that you’re connected to a domain).

So the username wasn’t just “MyUsername”; it was “DiskStation\MyUsername”. Once I did that, my DiskStation appeared under Network.
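
The same credential format works when mapping the drive from the command prompt. Here’s a sketch – ShareName is a placeholder for one of your shared folders, and the trailing * makes Windows prompt for the password:

net use T: \\DiskStation\ShareName /user:DiskStation\MyUsername *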

Killing all connections to a SQL Server Database

One issue I’ve run across frequently during development is restoring a database to a newer state. Often, when I want to perform the restore, there are active connections to my development database, so restoring will fail.

Of course, StackOverflow had the answer to this, but I’ve searched for the solution to this problem enough times that it made sense for me to finally write it down.

Script to kill all connections to a database (More than RESTRICTED_USER ROLLBACK)

User AlexK posted this excellent solution:

For MS SQL Server 2012 and above

USE [master];

DECLARE @kill varchar(8000) = '';  
SELECT @kill = @kill + 'kill ' + CONVERT(varchar(5), session_id) + ';'  
FROM sys.dm_exec_sessions
WHERE database_id  = db_id('MyDB')

EXEC(@kill);

For MS SQL Server 2000, 2005, 2008

USE master;

DECLARE @kill varchar(8000); SET @kill = '';  
SELECT @kill = @kill + 'kill ' + CONVERT(varchar(5), spid) + ';'  
FROM master..sysprocesses  
WHERE dbid = db_id('MyDB')

EXEC(@kill); 
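
Once the connections are killed, the restore can go through right away. As a minimal sketch (the backup path is a placeholder, and WITH REPLACE may or may not be appropriate for your situation):

RESTORE DATABASE MyDB FROM DISK = 'C:\Backups\MyDB.bak' WITH REPLACE;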

 

Fix: Visual Studio doesn’t remember last open documents

After installing Visual Studio 2017 a few months back, I noticed that some projects were loading strangely, while others loaded just fine. The two main issues I experienced were:

  • Documents I had open in my previous session wouldn’t reload when I started Visual Studio 2017
  • Windows I had arranged in my multi-monitor layout were not reopening where I expected them to

A quick Stack Overflow search led me to the answer regarding the first: the .suo file had become corrupt and needed to be deleted. Once I knew that, the trick was finding the .suo file:

  1. From the directory containing your solution file (.sln), open the folder named “.vs”.
  2. In the “.vs” folder, open the folder that has a name matching your solution name.
  3. Inside the solution folder, there may be multiple folders, one for each version of Visual Studio
    1. v14 is for Visual Studio 2015
    2. v15 is for Visual Studio 2017

These folders will contain your .suo file, which is hidden by default in Windows, so you need to enable “Show hidden files, folders, and drives” in your Folder options in order to see it. For instructions on that (Win 7,8, or 10), see the following article: https://www.howtogeek.com/howto/windows-vista/show-hidden-files-and-folders-in-windows-vista/

After you’ve found the .suo, go ahead and delete it (make sure Visual Studio is not running with the solution open). A new one will be created for you when you open the solution file the next time.
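
In my case the whole fix boiled down to a single command. Here’s a sketch with a made-up path – substitute your own solution folder and Visual Studio version, and note the /A:H switch, which is needed because the file is hidden:

del /A:H "C:\Projects\MySolution\.vs\MySolution\v15\.suo"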

I still haven’t found a solution to my second issue (I will definitely write about it if I find one).

What the SUO (Solution User Options) file controls

After solving my problem, I decided to take a look at the responsibilities of the .suo file. Microsoft’s documentation (VS 2015 version – 2017 isn’t available at the time of this writing) isn’t very forthcoming in detailing what exactly the SUO is doing. Based on digging around on the web, it seems that the following are its responsibilities (among others):

  • Remembers last open files
  • Remembers breakpoints
  • Remembers expanded nodes in solution explorer
  • Remembers startup project
  • Remembers last open tool windows and their positions
  • Remembers watch window contents

The file is a binary format and not human-readable, so it’s not something you can simply hack around with like you can a solution (.sln) or project (.xxproj) file. It should not be added to version control.
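
If you’re using Git, the easiest way to keep it (along with the rest of Visual Studio’s per-user state) out of version control is to ignore the entire “.vs” folder in your .gitignore:

.vs/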

How to Fix credential validation issue on Azure WebJob renewal of Let’s Encrypt Certificate

A while back, I posted about setting up SSL encryption for free with Azure and Let’s Encrypt: Let’s Encrypt + Azure = Win!

This has been working smoothly for me since I set it up, but I noticed that errors started popping up in the log recently. Here is part of the stack trace:

Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.RenewCertificate ---> Microsoft.IdentityModel.Clients.ActiveDirectory.AdalServiceException: AADSTS70002: Error validating credentials. AADSTS50012: Invalid client secret is provided. Trace ID: 958b11ab-839d-4a8d-97e6-fad1c3df0300 Correlation ID: e3f7c035-8978-4aa2-b01a-5c8fc74661ac Timestamp: 2017-05-31 14:14:26Z ---> System.Net.WebException: The remote server returned an error: (401) Unauthorized. at System.Net.HttpWebRequest.GetResponse() at Microsoft.IdentityModel.Clients.ActiveDirectory.HttpWebRequestWrapper.

It turns out that the API key I had set up for my application registration had expired. I had to create a new key with no expiration and then update my web application’s settings with the new client secret. The exact steps I took are listed below:

  1. Login to Azure
  2. Navigate to “App Registrations”
  3. Choose the Registration you need to update
  4. Click the “settings” icon (or “All Settings” button)
  5. Choose “Keys” under API Access
  6. Type a description into the new row, choose “Never” under the duration drop down and then hit “Save” above.
  7. Once saved, copy the value (it won’t be visible again if you don’t copy it now)
  8. (Optional) delete your old key
  9. Navigate to the Azure App Service that has the web job that registers your SSL certificate
  10. Choose “Application Settings” from the menu
  11. Scroll down to the setting titled something like “letsencrypt:ClientSecret” (assuming you did the setup as in the article linked at the top) and paste the value you copied into the second text box
  12. Click “Save” above

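If you prefer the command line over the portal for steps 9–12, the Azure CLI can update the same app setting. This is only a sketch – the resource group, app name, and secret value are placeholders, and the setting name assumes you used the same one as in the original setup:

az webapp config appsettings set --resource-group MyResourceGroup --name MyWebApp --settings "letsencrypt:ClientSecret=<new-client-secret>"
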
Once you’re done, the web job should work the next time it runs. For another explanation with some pictures of the process, check out this blog post here: Let’s Encrypt on Azure Web Apps – Key Expiration Issue.

Converting a Lead Acid Battery-Powered Lawn Mower to Use Lithium Batteries

About 7 years ago, I was in the market for a new lawn mower. Looking at all the options at the time, I decided to go with an electric 24v, 20 amp-hour lawn mower sold under a brand called Earthwise. Here is the lawn mower in all its glory, model 60120:

Earthwise 60120 electric lawn mower

Credit: Amazon.com

I loved this mower from the get-go. It was extremely quiet, could mow my entire 1/4-acre lawn on a single charge, and didn’t require gas, oil, spark plugs, etc. The only maintenance was charging the battery and sharpening the blade.

That entire first season was great, but the mower wouldn’t hold a charge for nearly as long during the second season. By fall, I had to charge it twice to finish my lawn. The third season was even worse: it wouldn’t hold a charge for more than a few minutes.

Opening the battery compartment on the lawn mower. Two 12-volt batteries are wired in series to produce the 24 volts that power the mower.

I knew the batteries needed to be replaced, but I had no idea how much they would cost. I think I paid something like $150 for a replacement set, which is a pretty steep price. A few years later, those batteries were dead too. I gave up on it and bought a cheap used gas mower last year, but I hated using it. The pull starter was finicky, it would occasionally expel clouds of black smoke, and I would forget to buy gas for it from time to time.

I decided to look around and see if other people had found solutions, and sure enough they had. With the proliferation of lithium-ion batteries, it isn’t hard to find the batteries needed or to perform the upgrade.

What I Needed to Do

In a nutshell, the task was simple. I had to do the following:

  • Buy lithium batteries to replace the lead-acid batteries
  • Cut the ends off of the black and white wires coming from the top of the battery case.
  • Solder new connectors to the black and white wires (whatever connectors matched the batteries I would buy)

It’s really that simple – just a few tasks and I would be on my way. A little research was required to figure out what batteries to buy, however.

Lithium Polymer (aka Li-Po, LiPo, or Li-Poly) Batteries

Lithium Polymer is a bit of a misnomer, since Lithium Polymer batteries are technically just lithium-ion batteries in a polymer casing (check out this excellent article for a good explanation on the difference between lithium-ion and lithium-polymer: Lithium Polymer vs Lithium-Ion batteries: What’s the deal?), but they came highly recommended as the battery of choice for this project. These batteries are being used all over the hobby world today, with drones leading the way. Lithium Polymer batteries are also used in many computers and cellphones.

Lithium Polymer batteries have a few important pieces of information written on them:

  • Voltage: You need a voltage that closely matches the mower’s. Since my lawn mower runs on 24 volts, a 22.2v li-po battery is the best fit. Lithium polymer cells have a “nominal” voltage of 3.7v, and battery voltages are just multiples of 3.7v because multiple cells are run together to form a single battery. A 22.2v battery is therefore made up of six 3.7v cells. Nominal voltage means the mid-range voltage: the cells run at 4.2v when fully charged and 3.2v when fully discharged, so a 22.2v battery will output somewhere between 19.2v and 25.2v over the course of its run
  • Number of cells: Batteries will often have something like “6S” or “3S” printed on them. This corresponds to the number of cells in the battery. 6S = 6 cells = 22.2v; 3S = 3 cells = 11.1v.
  • Capacity/Amp hours: Capacity, which determines runtime, is measured in mAh (milliamp hours). A battery rated at 5000 mAh has a capacity of 5 amp hours. Since my mower came with 20 amp hours, I wanted my batteries to get close to that in order to keep the same runtime.
  • “C” Rating/Capacity Rating/Discharge rating: Batteries also list a C rating, which is used to determine the maximum load a battery can safely sustain. 1C = the capacity of the battery, so if a battery has 5 amp hours/5000 mAh, 1C = 5 amps. If that battery’s C rating is 40C, then the maximum load is 200 amps.

All of this is explained in much greater detail by this excellent article: A Guide to Understanding LiPo Batteries

Based on all this information, I knew I needed 22.2v batteries, and I wanted to get somewhere around 20 amp hours. I read from another resource that 20C was sufficient for others who did this project, so I figured I could do that or above. Looking online, I found the batteries to be fairly expensive. I settled on 2 pairs of these batteries (sold as 2 each): https://www.amazon.com/gp/product/B01AW7CKLW/ref=oh_aui_detailpage_o01_s00?ie=UTF8&psc=1 (22.2v, 4500mAh, 6S, 45C, Deans connector). Note that it says they come with XT-60 connectors, but the picture shows Deans connectors, which is what I received.
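
A quick sanity check of those specs against the requirements above (just arithmetic on the numbers already mentioned):

  • 22.2v nominal (6S) is the closest match for the 24v mower
  • 4 packs × 4.5 amp hours = 18 amp hours combined, close to the original 20 amp hours
  • 45C × 4.5 amp hours ≈ 200 amps of maximum safe draw per pack, well above the 20C (90 amps) that others found sufficient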

Deans Connectors

Deans connectors are apparently very common in the hobby world. I bought a pack of male plugs and a few splitters:

Soldering the ends was a little bit tricky as the connectors from the battery were fairly thick. I eventually got it right though, and the connections work fine.

Other Considerations

Charging the Batteries

You also need a charger for these batteries. Unfortunately, you have to charge them one at a time, so if you want 4 batteries like I have, you either want a multi-battery charger or you have to be a little patient.

Knowing when to Charge

It’s a good idea to get low-voltage indicators: https://www.amazon.com/gp/product/B003Y6E6IE/ref=od_aui_detailpages01?ie=UTF8&psc=1. If you put these on your batteries when you use them, they make a rather annoying sound when the voltage drops to the low threshold. This is important because your mower’s meter isn’t going to tell you when your charge is low. If you push a li-po battery too much, you can cause damage to the battery or it could explode. These things are loud enough that I can hear them while running the lawn mower.

Safely Storing and Transporting

Li-Pos are very flammable and difficult to put out. It is advised to buy a (relatively cheap) fireproof bag for storage and charging, and to charge the batteries to the “Storage” setting on your charger when you aren’t going to use them for a week or more. You should store them at room temperature, and it is advised that you be present while charging due to the fire hazard. The fireproof bag I purchased is here: https://www.amazon.com/gp/product/B01H4QCZ4G/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

Conclusion

The mower now holds a charge that is easily long enough to mow my entire lawn again. Li-Po batteries are supposed to last for 200-300 charges in good conditions, so I’m hoping to get several years out of this setup. Charging is a bit of a pain, but I tend to mow on the weekends, so I’m usually around long enough to charge all 4 of them (it takes a couple of hours to fully charge each battery).

The mower has enough power to mow at most of the height settings, but it struggles at the lowest levels. This is fairly consistent with how the lead acid batteries performed as well – there just isn’t enough power to cut thick grass down to a height significantly lower than where it currently stands.

It turned out that doing this conversion was fairly easy, but not particularly cheap. All told, I bought the following:

  • 2 sets of 2 x 22.2v 4500mAh, 45C batteries – $110 each set ($220 total)
  • 2 battery low-voltage indicators – $5 each ($10 total)
  • Li-Po battery charger/balancer – $55
  • Fireproof bag (holds 4 batteries, came with 2 more low-voltage indicators) – $15

Together, that’s $300, which could buy a decent gas mower. However, I’m a nerd so I enjoyed the project.


Configuring a Fraud Detection Whitelist on Office 365 / Exchange 365

I’ve been set up with Office 365 for around a year, and I’m still discovering little things to tweak and optimize. One such thing I ran across today was a little message in some emails that were generated by an on-premises web server:

This sender failed our fraud detection checks and may not be who they appear to be. Learn about spoofing

While the link provided by Microsoft about spoofing describes spoofing in detail, it doesn’t say anything about what to do when you know something isn’t fraudulent and want to prevent Exchange from flagging it. After doing a little digging (I started by looking for some kind of whitelist, or a way to whitelist certain IP addresses on Exchange/Office 365), I came across a very helpful article: This Sender Failed Our Fraud Detection Checks and May Not Be Who They Appear to Be.

In a nutshell, the problem is that the email headers specify an originating IP address (our web server’s address) that isn’t allowed by my domain’s SPF (Sender Policy Framework) configuration. When SPF is configured, the receiving server looks at the message’s sending domain and checks that the sending IP address is on that domain’s allowed list, to prevent fraudulent sending. After all, it’s pretty damn easy to spoof an email address using software.

SPF records are added to your domain’s DNS records and are pretty easy to update. To that end, I logged into my DNS provider and took a look at the records for my domain. There, I found a TXT record that was set up when I initially configured Office 365. This record had the following value:

v=spf1 include:spf.protection.outlook.com -all

In order to add my web server’s address to this record and thus resolve my issue, the line simply needed to be modified like so:

v=spf1 ip4:xxx.xxx.xxx.xxx include:spf.protection.outlook.com -all

xxx.xxx.xxx.xxx is, of course, the IP address you want to “whitelist.”

Once the old DNS record expires (I have a TTL of 1 hour on this record), the new configuration should take effect and your messages will no longer be destined for your Junk Email folder.
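
To confirm the updated record is actually being served, a quick TXT lookup from the command prompt will show it (yourdomain.com is, of course, a placeholder for your own domain):

nslookup -type=TXT yourdomain.com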

EntityFramework – Grouping by Date Ranges

If you’ve ever created an outstanding balance report or other report that deals with aggregating data into date ranges, you’ll know that it isn’t immediately obvious how to structure your query, whether using SQL or LINQ (at least, it wasn’t to me).

My initial thought was to run multiple queries (one for each time range) and munge the results together. However, an elegant solution is to use SQL’s CASE expression to group date ranges together.

Let’s say you wanted a report that summed the amount of unpaid invoices in 20 day groupings (0-19 days past due, 20-39 past due, 40+). You could write something like this:

SELECT DaysSinceDueRange, SUM(Amount) AS Amount
FROM (SELECT CASE WHEN DATEDIFF(DAY, DueDate, GETDATE()) < 20 THEN 0
		  WHEN DATEDIFF(DAY, DueDate, GETDATE()) BETWEEN 20 AND 39 THEN 20
		  WHEN DATEDIFF(DAY, DueDate, GETDATE()) > 39 THEN 40
             END AS DaysSinceDueRange,
             Amount
       FROM Invoices
       WHERE Unpaid=1) inv
GROUP BY DaysSinceDueRange

This is really elegant, but then the question becomes how to do this with an ORM like EntityFramework. There are a couple of tricks required here:

  1. To do the date comparisons, EntityFramework requires the use of the System.Data.Entity.DbFunctions.DiffDays method (in EF 6 – it used to be in System.Data.Objects.EntityFunctions). If you try to do something like (DateTime.Now - invoice.DueDate).TotalDays, you’ll get the exception “DbArithmeticExpression arguments must have a numeric common type” because Entity Framework cannot translate DateTime subtraction into SQL.
  2. To do CASE / WHEN / THEN / END in EntityFramework, you have to make use of a lot of ternary operators. It can be kind of ugly, but if you write your code well enough, it should be fairly readable (or at least as readable as the SQL expression).

Here is an example of the SQL above translated into LINQ:

Context.Set<Invoice>()
       .Where(inv => inv.Unpaid)
       .Select(inv => new
       {
           DaysSinceDueRange = DbFunctions.DiffDays(inv.DueDate, DateTime.Today) < 20 ? 0 :
                               DbFunctions.DiffDays(inv.DueDate, DateTime.Today) >= 20 && 
                                   DbFunctions.DiffDays(inv.DueDate, DateTime.Today) < 40 ? 20 : 
                               40,
           Amount = inv.Amount
       }).GroupBy(inv => inv.DaysSinceDueRange)
       .Select(g => new
       {
           DaysSinceDueRange = g.Key,
           Amount = g.Sum(inv => inv.Amount)
       });

Of course, you can get more complicated in a hurry, but I think this is a pretty elegant way to handle grouping data by date ranges.

Project Fi is the way to go if you are a low cellular data user

After years of being an AT&T mobile customer, dating back to the Cingular days, I finally made the jump to Google’s Project Fi last December. All in all, the service has been very good, and the savings have been ridiculous. AT&T and Verizon have recently rolled out unlimited data plans, so the pricing is a little different than the plan I was on, but it’s not too dissimilar from what I had. Note that I do not recommend Project Fi if you’re a really heavy cellular data user (> 6GB total) because they charge $10/GB; if you’re using a lot of data, that will add up fast. However, the problem with most plans is they don’t give you anything back if you don’t use your data (maybe you get rollover data, but I’d rather have money). Project Fi pays you back for what you don’t use.

With AT&T, my wife, my sister, and I had a family share plan with 6GB of data. The total for this plan was right around $210/mo, and that was with a corporate discount applied. All of us had new-ish smartphones (my wife and sister had iPhones, I had an Android), and we were coming up on the end of our 2-year contracts.

While attending That Conference last year, someone told me about Project Fi and how little it cost. I started looking into it, and when my phone (an LG G3) turned itself into an unbootable brick one day in September, I decided to buy a Nexus 6P as a replacement. Not only was the phone reasonably priced, it was also one of the very limited selection of phones that work on Project Fi. It took a little convincing to move my wife from her iPhone to Android, but when they announced the Pixel, she agreed to make the move.

Cell Coverage and Quality

A few months in, I can tell you the service, at least where I live in Madison, Wisconsin, has been very good. I haven’t had many instances where I couldn’t get a signal. From the Project Fi FAQ:

Project Fi has partnered with Sprint, T-Mobile, and U.S. Cellular, three of the leading carriers in the US, to provide our service.

They also provide a link to a coverage map: https://fi.google.com/coverage

Project Fi also tries to utilize Wi-Fi calling when cell quality is low. I have found this to be a bit of a mixed bag – I sometimes don’t get a dial tone, or the phone doesn’t indicate that a call is going out until, suddenly, someone picks up.

The only places where I’ve noticed signal quality problems so far have been inside airports. In particular, it was difficult to get a signal at O’Hare. Other people I was traveling with who had Verizon received a better signal.

I have also noticed that sometimes SMS messages won’t come through unless I enable cellular data (and yes, I have verified this even when the messages are not MMS). My wife hasn’t had the same problem on her Pixel, so mine could be a hardware issue or something specifically related to the Nexus 6P.

Costs

Getting back to the costs, our bill comes in at around $45/month for two people. Previously, it was costing about $140/month for two people. That’s almost $100/mo we’re saving. Because Fi reimburses you for unused data, it incentivizes us to use less than the 2GB we pay for. I even have a little widget on my phone that shows how much data I’ve used, and it really encourages me to think about how I’m using data.

Here is a breakdown of the charges from my last bill:

Last month’s usage (for Feb 2 – Mar 2)

  • Unused data credit for 1.666 GB at $10/GB: -$16.65

Next month’s charges (for Mar 2 – Apr 2)

  • Fi Basics (2 people, $20 + $15/member): $35.00
  • Prepaid data (2 GB at $10/GB): $20.00
  • Taxes & regulatory fees: $6.12

Total: $44.47

International Calling

One of the other great benefits of Project Fi is the international calling aspect of the plan. Data is still $10/GB in 135 countries. From their FAQ:

Project Fi offers high speed data in over 135+ countries and destinations for the same $10/GB you pay in the U.S. For a complete breakdown of specific countries please check our International Rates.

Further:

Unlimited international texts are included in your plan. If you’re using cell coverage, calls cost 20¢ per minute. If you’re calling over Wi-Fi, per-minute costs vary based on which country you’re calling and you’re charged only for outbound calls. Please check our international rates for more information.

I haven’t had a chance to try it out, but I love this part of the plan. When I traveled to Belize last year, we paid $20 for a SIM card, and calls there are generally very expensive; data is incredibly expensive. Had I had Project Fi at the time, those charges would have been minimal (coverage is another issue altogether, but at least when you have coverage, usage comes at a reasonable rate).

Summary

So, in summary, if you’re a relatively low cellular data user and don’t mind having Google phones, this plan is a great value. I’m looking at saving nearly $1200 this year because of it. I can think of a lot better things to do with my money than spend it on cellular service.

EntityFramework Performance and IEnumerable vs IQueryable

Working in the .NET world, you get pretty used to dealing with IEnumerable collections. However, you have to be aware of performance issues that can arise when using them with EntityFramework. Sometimes I forget about IQueryable because LINQ to Entities hides much of the difference between retrieving objects from a database and working with an in-memory collection, and IQueryable is specific to querying a database.

When using the Repository pattern, one of the things I love to do is add a flexible “Find” method to the repository. Below is an example:

public partial class Order
{
    public int ID { get; set; }
    public string CustomerOrderNumber { get; set; }
    public DateTime? ShipDate { get; set; }
    public int OrderTypeID { get; set; }
}

public class FindOrdersRequest
{
    public IEnumerable<int> OrderIDs { get;set; }
    public IEnumerable<string> CustomerOrderNumbers { get; set; }
    public IEnumerable<int> OrderTypeIDs { get; set; }
    public bool? HasShipDate { get; set; }
    public DateTime? ShipDateBefore { get; set; }
    public DateTime? ShipDateAfter { get; set; }
    
    public FindOrdersRequest()
    {
        OrderIDs = new List<int>();
        CustomerOrderNumbers = new List<string>();
        OrderTypeIDs = new List<int>();
    }
}

//I would normally implement an interface here, but for the sake of brevity am excluding from this example
public class OrderRepository
{
    private IDbContext context;
    public OrderRepository(IDbContext context)
    {
        this.context = context;
    }

    public IEnumerable<Order> Find(FindOrdersRequest request)
    {
        IEnumerable<Order> orders = context.Set<Order>().AsEnumerable();
        if(request.OrderIDs.Any())
        {
            orders = orders.Where(o => request.OrderIDs.Contains(o.ID));
        }
        if(request.CustomerOrderNumbers.Any())
        {
            orders = orders.Where(o => request.CustomerOrderNumbers.Any(x => o.CustomerOrderNumber.Equals(x)));
        }
        if(request.OrderTypeIDs.Any())
        {
            orders = orders.Where(o => request.OrderTypeIDs.Contains(o.OrderTypeID));
        } 
        if(request.ShipDateAfter.HasValue)
        {
            orders = orders.Where(o => o.ShipDate.HasValue && o.ShipDate >= request.ShipDateAfter);
        }
        if (request.ShipDateBefore.HasValue)
        {
            orders = orders.Where(o => o.ShipDate.HasValue && o.ShipDate <= request.ShipDateBefore);
        }
        if(request.HasShipDate.HasValue)
        {
            orders = orders.Where(o => o.ShipDate.HasValue == request.HasShipDate.Value);
        }

        return orders;
    }
}

public class OrderService
{
    private OrderRepository orderRepository;
    public OrderService(OrderRepository orderRepository)
    {
        this.orderRepository = orderRepository;
    }

    public List<Order> GetUnshippedOrders()
    {
        return orderRepository.Find(new FindOrdersRequest()
        {
            HasShipDate = false
        }).ToList();
    }
}

The major problem with this code is the AsEnumerable() call on context.Set<T> in the Find method, which will pull every row from the database in full. The SQL generated will be equivalent to a SELECT *, and will probably look something like:

SELECT [Extent1].[ID] AS [ID], 
    [Extent1].[CustomerOrderNumber] AS [CustomerOrderNumber], 
    [Extent1].[ShipDate] AS [ShipDate],
    [Extent1].[OrderTypeID] AS [OrderTypeID]
    FROM [dbo].[Orders] AS [Extent1]

Now, you might not notice if you have 10 rows, but if you have 1,000,000, you’ll notice as your application burns to the ground and consumes all the memory on whatever server it’s running on.

So, an easy fix is to change that one line to IQueryable / AsQueryable() like so:

IQueryable<Order> orders = context.Set<Order>().AsQueryable();

Now we get the benefits of deferred execution until ToList is called on the results of the Find method from OrderRepository. The SQL generated will now be something like:

SELECT [Extent1].[ID] AS [ID], 
    [Extent1].[CustomerOrderNumber] AS [CustomerOrderNumber], 
    [Extent1].[ShipDate] AS [ShipDate],
    [Extent1].[OrderTypeID] AS [OrderTypeID]
    FROM [dbo].[Orders] AS [Extent1]
    WHERE [Extent1].[ShipDate] IS NULL

This is a huge improvement already, but it can be much better than this. In the OrderRepository class, if we also make the return type IQueryable, we can then further query the database before pulling the results into memory.

public IQueryable<Order> Find(FindOrdersRequest request)
{
    //the rest of the method remains the same
}

This distinction is important, and I will provide an example.

After I add this “Find” functionality to my repositories, I tend to build reports using those methods, which frequently utilize .GroupBy() after filtering. If we were to leave the Find method as returning an IEnumerable<Order> collection, we would find that the SQL generated would not be what we wanted.

For example, let’s say I now wanted a report that showed the number of shipped and unshipped orders by OrderTypeID. I would add a method to the OrderService as such:

//assume that an object ReportItem exists with properties as defined below
public IEnumerable<ReportItem> GetUnshippedOrdersByTypeReport()
{
    var report = new List<ReportItem>();
    var results = orderRepository.Find(new FindOrdersRequest() { HasShipDate = false })
                                 .GroupBy(order => order.OrderTypeID)
                                 .Select(g => new
                                 {
                                     OrderTypeID = g.Key,
                                     ShippedOrderCount = g.Count(x => x.ShipDate.HasValue),
                                     UnshippedOrderCount = g.Count(x => !x.ShipDate.HasValue)
                                 });
    foreach(var result in results)
    {
        report.Add(new ReportItem()
        {
            OrderTypeID = result.OrderTypeID,
            ShippedOrderCount = result.ShippedOrderCount,
            UnshippedOrderCount = result.UnshippedOrderCount
        });
    }
   
    return report;
}

With this code, the benefits of using IQueryable over IEnumerable in the repository become clear.

If we leave the repository with IEnumerable, the SQL generated will be done in two phases:

  1. The filtering of the “FindOrdersRequest” will be executed as the SQL statement above and the results will be stored into a temporary IEnumerable collection
  2. The Group By operation will operate on this temporary collection. More trips to the database will be taken if any navigation properties are referenced in the GroupBy (there are none in this example)

If we change the repository to use IQueryable, the SQL generated will be done in a single, neat statement. It will filter and perform the group by at once, resulting in much better performance. Another performance benefit is that we are projecting specific columns and not populating an entire Order object with every field. For this trivial example, it doesn’t make much of a difference, but if you’re dealing with tables/objects that have many columns/properties, you will notice. If you’re deploying to a cloud-based environment, you know that compute time and efficiency matter a lot, so following best practices for performance will help you out a lot in that respect.
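
For reference, with the IQueryable version the whole report boils down to a single statement roughly like the one below (the exact aliases and shape EntityFramework generates will differ, but the idea is the same):

SELECT [Extent1].[OrderTypeID] AS [OrderTypeID],
    SUM(CASE WHEN [Extent1].[ShipDate] IS NOT NULL THEN 1 ELSE 0 END) AS [ShippedOrderCount],
    SUM(CASE WHEN [Extent1].[ShipDate] IS NULL THEN 1 ELSE 0 END) AS [UnshippedOrderCount]
    FROM [dbo].[Orders] AS [Extent1]
    WHERE [Extent1].[ShipDate] IS NULL
    GROUP BY [Extent1].[OrderTypeID]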

 

How to Add and Manage Outlook Rules/Filters for Office 365 Shared Mailboxes

It isn’t obvious, but you can set up and manage rules for shared mailboxes in Office 365 just as you do for users’ mailboxes. The reason it isn’t obvious is that you can’t administer these rules through the desktop client (or any other client) like you can with user mailboxes, and the settings for managing them don’t even appear to be available from the Office 365 or Exchange administration panel. Here are the steps to take:

  • Login to Office 365 with an account that has administrative access to the Shared Mailbox
  • Enter the following URL into your browser to get to the shared mailbox options page: https://outlook.office365.com/ecp/<email address>, where <email address> is the email address of the shared mailbox with the rules you want to manage.
    • For example, if your email address was info@contoso.com, you would enter the address https://outlook.office365.com/ecp/info@contoso.com
  • Click the “Organize email” section on the left menu

This method also works for editing user mailbox rules, provided you have access.

Note: As of this writing, Internet Explorer is the only browser I have tried that successfully adds a rule involving a sent-from or sent-to criterion. Chrome and Firefox both give a CORS error because selecting people tries to open the contacts app for the account, and that application is part of a different domain. The message I receive from Firefox is:

08:59:07.114 Load denied by X-Frame-Options: https://outlook.office.com/owa/#viewmodel=OwaOptionRichPeoplePickerViewModelFactory does not permit cross-origin framing.

Managing Global Rules

If you simply want to edit global rules that affect all mail flowing to/from your organization, you can follow the steps below:

Getting to the Exchange Admin Center

  • Login to Office 365 with an account that has administrative access to the Shared Mailbox
  • Open the Admin Center by clicking the “Admin” button
  • When the Admin Center opens, click the “Admin Centers” link on the left-side menu and choose “Exchange”

Managing Rules

  • From the Exchange Admin Center, there is a “Rules” link under the mail flow section. Remember, this area only allows you to perform a limited set of actions that does NOT include moving a message to a specific folder, since these rules operate at a global level. The actions you can perform here are:
    • Forward the message for approval…
    • Redirect the message to…
    • Block the message…
    • Add recipients…
    • Apply a disclaimer to the message…
    • Modify the message properties…
    • Modify the message security…
    • Prepend the subject of the message with…
    • Generate incident report and send it to…
    • Notify the recipient with a message…