Enabling TLS 1.2 on IIS 7.5 and Discovering a Great Tool (IIS Crypto) Along the Way

TL;DR summary of the fastest way to enable TLS 1.2 on IIS 7.5:

  • Download IIS Crypto at https://www.nartac.com/Products/IISCrypto/
  • Run the executable on your server
  • On the user interface, click the “Best Practices” button (located at bottom left)
  • Click “Apply” (located at bottom right)
  • Reboot Server

The full details:

Today I was contacted by a third-party company that exchanges data with mine and they informed me that they were requiring TLS 1.2 connections as of the new year. Reviewing information about my server’s crypto configuration, I found that, indeed, TLS 1.1 and TLS 1.2 were not enabled.

In setting out to resolve the problem, I ran across a couple of posts that talked about updating registry keys and doing some other messy stuff. And then, I found this post on ServerFault about an awesome tool called IIS Crypto.
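For the curious, the registry approach those posts describe boils down to flipping a couple of Schannel values, which is what IIS Crypto automates for you. Here’s a rough PowerShell sketch of the manual approach (shown for illustration only; run it elevated, verify against your own environment, and note that a reboot is still required):

#Enable TLS 1.1 and TLS 1.2 via the Schannel registry keys (run elevated)
foreach ($protocol in 'TLS 1.1', 'TLS 1.2') {
    foreach ($role in 'Server', 'Client') {
        $key = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\$protocol\$role"
        New-Item -Path $key -Force | Out-Null
        New-ItemProperty -Path $key -Name 'Enabled' -Value 1 -PropertyType DWord -Force | Out-Null
        New-ItemProperty -Path $key -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force | Out-Null
    }
}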

From the IIS Crypto website:

IIS Crypto is a free tool that gives administrators the ability to enable or disable protocols, ciphers, hashes and key exchange algorithms on Windows Server 2008, 2012 and 2016. It also lets you reorder SSL/TLS cipher suites offered by IIS, implement best practices with a single click, create custom templates and test your website.

Not only is the tool free, it doesn’t even install anything on your machine.

After downloading and running it, I looked over the list of available protocols, ciphers, etc. They provide a “Best Practices” button which enables only the protocols, ciphers, etc. that should be enabled using, well, current best practices. This is another awesome feature, because the full list to review is fairly extensive, and not having to research each item myself was a huge time saver.

On the program’s menu is a “Site Scanner” tool that will open a browser and analyze your site. You can also go there directly without running the application; the URL is:

https://www.ssllabs.com/ssltest/analyze.html?d=<your site>&hideResults=on (where <your site> is the website you want to analyze)

The analyzer checks your certificate(s), available protocols, and cipher suites, performs handshake simulations with a bevy of operating system / user-agent combinations (well over 50), and analyzes against various attacks. When I first ran the test, the results weren’t so great – there were a number of problems related to my crypto settings.

After reviewing the analyzer’s findings, I applied the “Best Practices” settings and restarted the server. Once the server came back up, everything was working and I passed the scan with flying colors.

For reference, I was working with IIS 7.5 running on Windows Server 2008 R2.

Migrating Azure Resources from one Subscription to Another

I love Azure. It’s a great platform and I’m very happy with the continuing evolution of products and services offered. If you ever have to move resources to a different subscription, there are a lot of little things you have to think about, because sometimes settings are tied to a particular subscription or resource group (which is tied to a subscription).

Some of the non-profit organizations I’ve built applications for have taken advantage of Microsoft’s donation offerings, where they receive Microsoft products and services at a heavily discounted rate. However, these subscriptions often come with a time limit, after which they must be purchased again. When that happens, a new Azure subscription is created and you have to reassign any resources that are under the old subscription to the new one.

The easy part is actually reassigning the subscriptions. There are two ways I see to do this:

  1. Create a new resource group under the new subscription, and then move all of the desired resources into that group
  2. Move the existing resource group to the new subscription. This works better when your resource groups are already well-defined
    1. Moving a resource group consists of selecting the resource group in the Azure portal and clicking the “Move” button

The trickier part is figuring out any resources that may have been tied to the old resource group name or subscription. Here are a couple I have found:

  • SendGrid (and likely other external/third-party services that can’t use Azure credits) cannot be migrated from one subscription to another. A new API key must be generated for the affected application(s).
  • Let’s Encrypt certificates generated using the extension http://www.siteextensions.net/packages/letsencrypt (detailed in the post http://gagetrader.info/2016/09/27/lets-encrypt-azure-win/) have a couple of settings that are tied to the subscription Id and the resource group. To view them, select the resource where the web job was registered in the Azure portal -> Application Settings -> Keys section. The ones that need to be edited are:
    • letsencrypt:SubscriptionId – 7dbf7306-25b3-4e5a-a85a-44017efb9cc5
    • letsencrypt:ResourceGroupName: (New Resource Group Name, if applicable)

    After you have completed this step, you will find that the web job fails with the following message the next time it runs:

    Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.RenewCertificate ---> Microsoft.Rest.Azure.CloudException: The client 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' with object id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' does not have authorization to perform action 'Microsoft.Web/sites/config/list/action' over scope '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Web/sites/MySite/config/publishingcredentials'. at Microsoft.Azure.Management.WebSites.WebAppsOperations.

    This long message basically means that we need to grant the Contributor role to the Let’s Encrypt service principal that was created during configuration of the extension. This is fairly straightforward:

    1. Make sure Azure PowerShell is installed on your machine (if you don’t have it, open the Microsoft Web Platform Installer and find Microsoft Azure PowerShell in the list)
    2. Open PowerShell as an administrator and sign in using the command:

      Login-AzureRmAccount

    3. Make sure your new Azure Subscription Id is selected. If not, run the following command:

      Select-AzureRmSubscription -SubscriptionId Your-Subscription-Id-Guid-Here

    4. Run the following command to assign the correct permissions to your new subscription Id:

      New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName Your-Service-Principal-Name-From-Extension-Setup

    Once that has been completed, the job should run again and be successful. Now your SSL certificates will continue to auto-renew.
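    If you want to double-check the assignment before the next scheduled run, this should list it (using the same service principal name as above):

      Get-AzureRmRoleAssignment -ServicePrincipalName Your-Service-Principal-Name-From-Extension-Setup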

Office 365 – Set Out of Office Response and Forward all User Emails as Exchange Admin

I recently had a member of my organization go on maternity leave, and, because of the way that babies work, she wasn’t able to set an out of office response or a forward.

Thankfully, with Exchange Admin Center, it’s pretty easy to do both.

Setting a User’s Out of Office Response

The first thing is to essentially impersonate a user as the Office Admin:

  1. Navigate to Exchange Admin Center on office.com (you have to be an Administrator to do this, obviously)
  2. Click your Name/Icon in the very upper right-hand corner and choose “Another user…”
  3. Choose the user in your organization that you need to update from the popup window that follows
  4. On the right sidebar of the window that opens for that user, you should see “shortcuts to other things you can do”
    1. In that list is “Set up an automatic reply message”

That’s it. Once you’ve chosen “Set up an automatic reply message,” you can format the message however you want (one for internal users and one for external users).
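If you’d rather script it, the same setting can be changed from Exchange Online PowerShell. Here’s a minimal sketch, assuming you already have a remote session connected (the mailbox address and messages are placeholders):

Set-MailboxAutoReplyConfiguration -Identity "user@yourdomain.com" -AutoReplyState Enabled `
    -InternalMessage "I am out of the office until January 2nd." `
    -ExternalMessage "I am out of the office until January 2nd."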

Forwarding a User’s Emails

  1. Log into Office 365 Admin Center
  2. Choose Users -> Active Users
  3. Find the User whose email you want to forward and click their name
  4. In the sidebar that opens, expand “Mail Settings” and click “Edit” next to Email Forwarding
  5. Flip forwarding to On and enter the email address where the messages should be forwarded
    1. You can optionally choose to keep a copy of the email in the inbox for the original user

Pretty straightforward, but not something I do all that frequently, so I know I’ll forget by the next time I have to do it 🙂

How to Map the Windows Desktop to a Different Folder

I recently set up a computer for a user who likes to store a lot of files on their desktop. That’s risky if those files aren’t being backed up. I’m a big proponent of using OneDrive, especially since my organization uses Office 365, which includes 1TB of OneDrive storage with each user license you purchase.

I suggested the user move their desktop files to OneDrive to get a backup and access from other places, and they mentioned that they’re more comfortable with the desktop because they remember where certain folders and files sit visually.

It turns out that you can map any folder to the desktop, and it’s easy:

  1. Open Windows Explorer (Win + e)
  2. Right-Click Desktop and choose “Properties”
  3. Click the “Location” tab
  4. Type the location of the directory you want to be the “Desktop” (or click “Move” and browse to the folder)
  5. Click OK

That’s it.
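If you ever need to script the same change, the Desktop location is stored in the user’s shell folders registry key. Here’s a sketch (the OneDrive path is just an example; sign out and back in, or restart Explorer, for it to take effect):

#Point the Desktop shell folder at a different directory (the path below is an assumption)
$newDesktop = "$env:USERPROFILE\OneDrive - MyOrganization\Desktop"
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders' `
    -Name 'Desktop' -Value $newDesktop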

For the user I mentioned above, I created a folder called “Desktop” in their OneDrive for Business folder and mapped the Desktop to that location. Now there are backups and they can use the desktop as they always have.

H/T to this post for the knowledge: Can you change the location of the Desktop folder in Windows?

Working with Foreign Keys when using EntityFramework Code First Migrations

One issue I’ve come across lately while working with Entity Framework Migrations has to do with foreign key relationships. If you’ve ever done any reconfiguration of your schema, you know you probably need to update your migration files to get all the data loaded correctly. Let’s take a simple example:

Let’s say you have an Order model for all of your orders defined as such:

public class Order
{
    public int ID { get; set; }
    public string OrderNumber { get; set; }
    public DateTime? ShipDate { get; set; }
}

This is obviously a very contrived example and a real order would have a lot of other information, but it works for this example.

Now let’s say you start tracking customers and want to distinguish orders by Customer (probably a good thing to do!). Here is the Customer object:

public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }

    public Customer()
    {
        Orders = new List<Order>();
    }
}

Now we need to modify our order by adding a CustomerID column. The result looks as you’d expect:

public class Order
{
    public int ID { get; set; }
    public string OrderNumber { get; set; }
    public DateTime? ShipDate { get; set; }
    public int CustomerID { get; set; }
    public virtual Customer Customer { get; set; }
}

You’ll need to properly set up your mapping wherever you have that defined. There are multiple ways to do this, but the approach I prefer is to define mapping files separately from my POCOs (plain old C# objects, the classes defined above) and then add them in the OnModelCreating method of my context class like this:

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{    
    modelBuilder.Configurations.Add(new OrderMap());
    modelBuilder.Configurations.Add(new CustomerMap());
    //other mapping classes
}

Inside the constructor of my OrderMap class, this line of code will add the relationship between Customer and Order:

this.HasRequired(t => t.Customer)
      .WithMany(t => t.Orders)
      .HasForeignKey(d => d.CustomerID);

Now, with all that setup, if you add a migration, Entity Framework will scaffold the changes required to make all this happen.

You’ll probably want to modify a couple of things, though.

For starters, you’ll want to set up your customers and point all of your existing orders at the appropriate Customer ID. You have to do this before adding the foreign key between Order and Customer, because the new column is non-nullable, so every CustomerID in your Orders table will initially be “0”.

Your migration might look something like this (EF will also scaffold some indexes, which I’ve omitted for brevity):

public partial class AddCustomerToModels : DbMigration
{
    public override void Up()
    {
        AddColumn("dbo.Orders", "CustomerID", c => c.Int(nullable: false));
        CreateTable(
            "dbo.Customers",
            c => new
                {
                    ID = c.Int(nullable: false),
                    Name = c.String(nullable: false, maxLength: 50)
                })
            .PrimaryKey(t => t.ID);

        AddForeignKey("dbo.Orders", "CustomerID", "dbo.Customers", "ID");
    }

    public override void Down()
    {
        //reverse the Up() steps in the opposite order
        DropForeignKey("dbo.Orders", "CustomerID", "dbo.Customers");
        DropTable("dbo.Customers");
        DropColumn("dbo.Orders", "CustomerID");
    }
}

You’ll need to add code to update those customers. I usually write a bit of SQL like the following and place it after the CreateTable call but before the AddForeignKey call:

Sql(@"INSERT INTO Customers (ID, Name) VALUES (1, 'Acme');
INSERT INTO Customers (ID, Name)
VALUES (2, 'Evil Corp')

UPDATE Orders SET CustomerID = 1");

This gets everything in your database ready for the foreign key constraint. Of course, if you don’t like hardcoding company information into your migrations (not a great practice, really), you can do this after the fact; but sometimes you already have all the data in your database and just need to move it around due to a schema change. This is, again, a bit of a contrived example.

Now, to the tricky part I really want to highlight: you have to be really consistent in the way you add foreign keys.

For example, these two lines are slightly different – the first uses the schema name for both the dependent and principal tables, while the second uses it only for the dependent table:

AddForeignKey("dbo.Orders", "CustomerID", "dbo.Customers", "ID");

AddForeignKey("dbo.Orders", "CustomerID", "Customers", "ID");

The foreign key names they generate are as follows:

[FK_dbo.Orders_dbo.Customers_CustomerID]

[FK_dbo.Orders_Customers_CustomerID]

If you later try to drop a foreign key and don’t use the exact same format you used when creating it, you will encounter errors – usually something like:

The object 'FK_dbo.Orders_dbo.Customers_CustomerID' is dependent on column 'CustomerID'.
ALTER TABLE DROP COLUMN CustomerID failed because one or more objects access this column.
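To make the pitfall concrete, dropping the two keys above requires mirroring the exact form used when creating them. A quick sketch of the matching calls:

//Each DropForeignKey must mirror the AddForeignKey call that created the key
DropForeignKey("dbo.Orders", "CustomerID", "dbo.Customers"); //drops FK_dbo.Orders_dbo.Customers_CustomerID
DropForeignKey("dbo.Orders", "CustomerID", "Customers");     //drops FK_dbo.Orders_Customers_CustomerID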

So the moral here is to be very consistent with your foreign key naming scheme. If you’ve got an old database that you’ve added code first to after the fact, you’ll probably have a lot of relationships that don’t use the schema name in the key name, so you’ll run into this frequently if you’re modifying your schema.

How to bind multiple SSL Certificates to port 443 for different domains/subdomains on the same IP address on an IIS Server

The title is a bit of a mouthful, but I’ve recently encountered a situation where I had multiple SSL certificates I wanted bound to two different domains being hosted on the same server with the same IP address.

Using the IIS Manager UI in IIS 7 (I’m not sure whether this applies to newer versions), you can add a binding for an SSL certificate on port 443, but you can only enter an IP address, not the host-header information.

In order to accomplish this, you have to use command line tools. Below is a great resource I found that helped me solve this problem. I’ll pull out the most relevant command:

https://blogs.iis.net/thomad/ssl-certificates-on-sites-with-host-headers

appcmd set site /site.name:"MySubDomainSite" /+bindings.[protocol='https',bindingInformation='*:443:mysubdomain.mysite.com']

In this example, “MySubDomainSite” is the site you have defined in IIS for the subdomain (or domain) where you are trying to assign the second certificate. Note that appcmd isn’t on the PATH by default; it lives in %windir%\system32\inetsrv.

PowerShell Script: Rename Files with Characters that are Invalid for OneDrive for Business and SharePoint Online

I’m a big fan of Microsoft’s Office 365 offerings, and one of my favorite components of that is the OneDrive storage that comes with each user. It’s a great way to make sure files are always backed up and available anywhere you have an internet connection.

One of the challenges you can encounter when using OneDrive for Business is that not all characters that are valid for filenames in a Windows environment are valid in OneDrive (or SharePoint Online files, for that matter). According to this article, the invalid characters are: \ / : * ? " < > | # %.

There are several other restrictions (such as file size and name length), but the invalid characters issue is the one I encounter the most frequently, especially when I migrate a user who has a lot of files to OneDrive for Business. In one case, there were over 1,000 files with invalid characters, and I wasn’t about to rename those by hand.

Looking online, there are several great resources for PowerShell scripts to solve this problem, although some were out of date and did not accurately reflect the current restrictions. In the end, there were two sources I liked a lot, so I combined them into a single script and modified it. It’s not a perfect or complete script, and I don’t consider myself even proficient with PowerShell, but it has worked well for me.

First, the sources:

Use PowerShell to check for illegal characters before uploading multiple files into SharePoint

Fix file names for Skydrive Pro syncing

I liked that the second one outputs to a TSV file that you can open with Excel and review. The first source just prints to the console, which isn’t very useful when you have a ton of files. Because of the file output, I was able to tweak things a little more, adding a few extras such as searching for file names containing “%20” (the HTML-encoded space).

function Check-IllegalCharacters ($Path, $OutputFile, [switch]$Fix, [switch]$Verbose)
{
    #The maximum allowed number of characters of a file's full path + name
    $maxCharacters = 400
    #The maximum file size in bytes (2 GB)
    $maxFileSize = 2147483648
    #A list of file types that can't be sync'd
    $invalidFileTypes = ".tmp", ".ds_store"
    #A list of file names that can't be sync'd
    $invalidFileNames = "desktop.ini", "thumbs.db", "ehthumbs.db"

    Write-Host "Checking files in $Path, please wait..."

    #Only run for a valid path
    if (!(Test-Path $Path))
    {
        Write-Host "Invalid path for file renames"
    }
    else
    {
        #if the output file exists, empty it first
        if (Test-Path $OutputFile)
        {
            Clear-Content $OutputFile
        }
        #add tab-separated headers to the output file
        Add-Content $OutputFile "File/Folder Name`tNew Name`tComments"

        #Get all files and folders under the path specified
        $items = Get-ChildItem -Path $Path -Recurse
        foreach ($item in $items)
        {
            #Keep a flag to indicate whether or not we can perform the updates (some problems are deal breakers)
            $valid = $true
            #Keep an array list for comments
            $comments = New-Object System.Collections.ArrayList
            
            if ($item.PSIsContainer) { $type = "Folder" }
            else { $type = "File" }
        
            #Check if the item's full path is longer than the max allowed length
            if ($item.FullName.Length -gt $maxCharacters)
            {
                [void]$comments.Add("$($type) $($item.Name) full path is $($item.FullName.Length) characters (max is $($maxCharacters)) and will need to be shortened")
                $valid = $false
            }

            if($item.Length -gt $maxFileSize)
            {
                [void]$comments.Add("$($type) $($item.Name) is $($item.Length / 1MB) MB (max is $($maxFileSize / 1MB)) and cannot be synchronized.")   
                $valid = $false
            }

            if ($invalidFileNames -contains $item.Name.ToLower())
            {
                [void]$comments.Add("$($type) $($item.Name) is not a valid filename for file sync.")
                $valid = $false
            }

            if ($invalidFileTypes -contains $item.Extension.ToLower())
            {
                [void]$comments.Add("$($type) $($item.Name) type $($item.Extension) is not a valid file type for file sync.")
                $valid = $false
            }
           
            #Technically all of the following are illegal: \ / : * ? " < > | # %
            #However, all but the last two are already invalid Windows filename characters,
            #so we only need to handle #, %, and the HTML-encoded space (%20)
            $illegalChars = '%20|[#%]'
            filter Matches($illegalChars)
            {
                $item.Name | Select-String -AllMatches $illegalChars |
                Select-Object -ExpandProperty Matches
            }
            
            #Replace illegal strings with legal characters where found
            $newFileName = $item.Name
            Matches $illegalChars | ForEach-Object {
                if ($Verbose) { [void]$comments.Add("Illegal string '$($_.Value)' found") }
                #These strings may be used on the file system but not SharePoint
                if ($_.Value -eq "%20") { $newFileName = $newFileName -replace "%20", " " }
                elseif ($_.Value -eq "#") { $newFileName = $newFileName -replace "#", "-" }
                elseif ($_.Value -eq "%") { $newFileName = $newFileName -replace "%", "-" }
            }

            if ($comments.Count -gt 0)
            {
                #output the details as a tab-separated line
                Add-Content $outputFile "$($item.FullName)`t$($item.FullName -replace [regex]::Escape($item.Name), $newFileName)`t$($comments -join ', ')"
                if ($Verbose)
                {
                    Write-Host "$type $($item.FullName): $($comments -join ', ')" -ForegroundColor Red
                }
            }
                
            #Fix file and folder names if found and the Fix switch is specified
            if ($newFileName -ne $item.Name)
            {
                if ($Fix -and $valid)
                {
                    Rename-Item $item.FullName -NewName $newFileName
                    if ($Verbose)
                    {
                        Write-Host "$type $($item.Name) has been changed to $newFileName" -ForegroundColor Yellow
                    }
                }
            }
        }
    }
    Write-Host "Done"
}

#Example: Check-IllegalCharacters -Path 'C:\Users\User\Downloads\Files With Errors' -OutputFile 'C:\Users\User\Desktop\RenamedFiles.tsv' -Verbose -Fix

I noticed that I ran into some errors when files were deeply nested – for example, when a file needed renaming inside a folder that was also renamed. Re-running the script a few times fixed that problem for me (each pass fixed one more layer of folders, so a file that needed renaming inside three levels of folders that also needed renaming would take four passes to complete). Sorting the items so that the deepest paths are processed first would probably avoid this, but re-running was good enough for me.

Quick Tip: How to get a text list of files in a Windows Directory

Every once in a while I need to get a list of just the file names in a directory. Sometimes there are a LOT of files and it would be a pain in the ass to type them out or write a program to manipulate them in some way.

The quick way to do this in Windows (images below are for Windows 10) is as follows:

  1. Open a windows explorer window (shortcut: hit Win + e, where “Win” is the Windows key) and navigate to the folder containing the files you want to print out
  2. Hold the Shift key and right-click in the folder (don’t click on any actual files) and choose the option “Open PowerShell window here”

  3. In the PowerShell window that opens, type:

    Get-ChildItem -Name

That’s it! Now it prints a list of file names that you can copy/paste from the PowerShell window into something else (Excel, Notepad, Word, whatever).
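If you’d rather skip the copy/paste step entirely, you can send the list straight to a file or the clipboard (Set-Clipboard is available in the PowerShell 5 that ships with Windows 10):

    #Write the bare file names to a text file
    Get-ChildItem -Name | Out-File -FilePath .\filelist.txt
    #Or copy them straight to the clipboard
    Get-ChildItem -Name | Set-Clipboard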

Alternate method: command prompt

  1. Open a command prompt (Hit the Windows key, type “cmd” and hit enter)
  2. Navigate to the folder containing the files (type cd “<your file path>”, for example: cd “C:\Users\Public\Public Pictures\Sample Pictures”)
  3. Type the command below:

    dir /b

  4. If you’re using Windows 10, you can simply copy/paste from the command prompt. With an older version of Windows, you need to:
    1. Right click in the command window and choose “Select All”
    2. Hit the Enter key. This will copy the contents of the command prompt to your paste buffer so you can paste using ctrl + v or right-click -> Paste


Connecting to Synology DiskStation from Windows 10

I’ve owned a small Synology DiskStation for a few years and really love its features and capabilities, especially considering its cost. One of the primary roles of my DiskStation is to back up my home computers. After recently purchasing a Surface Pro 4 and applying the Creators Update, I was having trouble connecting to my DiskStation running DSM 6.1. After reading quite a few posts online about different problems, it seems the solution I needed was really quite basic.

What was most curious about this problem was that I could not see my DiskStation under “Network” in Windows Explorer, and I received error code 53 (System error 53 has occurred. The network path was not found) when I tried to map the network drive from the command prompt like so:

net use T: \\DiskStation

Running nbtstat -c and net view from the command line both listed my DiskStation with the UNC name I was expecting, and the nbtstat output showed the correct IP address (I have mine configured as static).

First, I made sure the SMB settings on the DiskStation were set to allow everything from SMB 1.0 through SMB 3.0 (DiskStation Control Panel -> File Services -> SMB -> Advanced Settings -> Maximum/Minimum SMB Protocol Settings).

Then, in Windows, if I opened Explorer and navigated to the IP address or the UNC share name (\\DiskStation, for example), it would prompt me for a password. This was the primary point of failure for me earlier: I had forgotten that when logging into another server, whether it’s a Synology DiskStation or a Windows Server, you have to provide the server name AND the account name (or domain name\account name, in the event that you’re connected to a domain).

So the Username wasn’t just “MyUsername” it was “DiskStation\MyUserName”. Once I did that, my DiskStation appeared under Network.
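For reference, you can also pass the qualified account directly when mapping from the command line (the share name below is just an example):

net use T: \\DiskStation\home /user:DiskStation\MyUserName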

Killing all connections to a SQL Server Database

One issue I’ve run across frequently during development is restoring a database to a newer state. Often, when I want to perform the restore, there are active connections to my development database, so the restore fails.

Of course, StackOverflow had the answer to this, but I’ve searched for the solution to this problem enough times where it made sense for me to finally write it down.

Script to kill all connections to a database (More than RESTRICTED_USER ROLLBACK)

User AlexK posted this excellent solution:

For MS SQL Server 2012 and above

USE [master];

DECLARE @kill varchar(8000) = '';  
SELECT @kill = @kill + 'kill ' + CONVERT(varchar(5), session_id) + ';'  
FROM sys.dm_exec_sessions
WHERE database_id  = db_id('MyDB')

EXEC(@kill);

For MS SQL Server 2000, 2005, 2008

USE master;

DECLARE @kill varchar(8000); SET @kill = '';  
SELECT @kill = @kill + 'kill ' + CONVERT(varchar(5), spid) + ';'  
FROM master..sysprocesses  
WHERE dbid = db_id('MyDB')

EXEC(@kill);
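A related trick: if you only need to clear connections long enough to run a restore, flipping the database through single-user mode rolls back and disconnects everyone in one statement. Just remember to set it back to multi-user when you’re done:

ALTER DATABASE [MyDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- run the restore (or other exclusive operation) here
ALTER DATABASE [MyDB] SET MULTI_USER;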