Earlier this year, I built a .NET Core web application and deployed it on IIS 7.5. I noticed right away that it was extremely slow on initial load. This was confusing because I had tested deployment to Azure and the performance was great, and all of my .NET Framework sites on the same server ran quickly. I struggled to find an answer for a long time, but while reading through documentation while setting up another application, I came across this bit of information:
You can make use of the preloading feature to have applications running before users connect. In your ApplicationHost.config, add the preloadEnabled attribute to the <application> element associated with the application. The application node is a child element of the sites node:
<site name="Default Web Site" id="1">
<application path="/rssbus" applicationPool="DefaultAppPool" preloadEnabled="true">
When PreloadEnabled is set to true, IIS will simulate a user request to the default page of the website or virtual directory so that the application initializes.
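If you'd rather not edit ApplicationHost.config by hand, the same attribute can also be set with appcmd. A sketch, reusing the site and application path from the snippet above (run from an elevated prompt):

```
%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/rssbus" /preloadEnabled:true
```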
While it technically is a bit of a workaround, since it doesn't solve the root problem I experienced, preloading has kept my application loading quickly since I enabled it.
TLDR; summary of the fastest way to enable TLS 1.2 on IIS 7.5:
- Download IIS Crypto at https://www.nartac.com/Products/IISCrypto/
- Run the executable on your server
- On the user interface, click the “Best Practices” button (located at bottom left)
- Click “Apply” (located at bottom right)
- Reboot Server
The full details:
Today I was contacted by a third-party company that exchanges data with mine and they informed me that they were requiring TLS 1.2 connections as of the new year. Reviewing information about my server’s crypto configuration, I found that, indeed, TLS 1.1 and TLS 1.2 were not enabled.
In setting out to resolve the problem, I ran across a couple of posts that talked about updating registry keys and doing some other messy stuff. And then, I found this post on ServerFault about an awesome tool called IIS Crypto.
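For reference, the manual approach amounts to registry edits along these lines; this is a .reg fragment for enabling TLS 1.2 on Server 2008 R2 (IIS Crypto manages these keys for you):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001
```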
From the IIS Crypto website:
IIS Crypto is a free tool that gives administrators the ability to enable or disable protocols, ciphers, hashes and key exchange algorithms on Windows Server 2008, 2012 and 2016. It also lets you reorder SSL/TLS cipher suites offered by IIS, implement best practices with a single click, create custom templates and test your website.
Not only is the tool free, it doesn’t even install anything on your machine.
After downloading and running it, I looked over the list of available protocols, ciphers, etc. The “Best Practices” button enables only the protocols, ciphers, and so on that current best practices recommend. This is another awesome feature, because the list of everything to review is fairly extensive, and not having to do the research on each item myself was a huge time saver.
On the program’s menu is a “Site Scanner” tool that will open up a browser and analyze your site. You can use this without running the application. The URL is:
https://www.ssllabs.com/ssltest/analyze.html?d=<your site>&hideResults=on (where <your site> is the website you want to analyze)
The analyzer checks your certificate(s), available protocols, and cipher suites, performs handshake simulations with a bevy of operating system / user-agent combinations (well over 50), and analyzes against various attacks. When I first ran the test, the results weren’t so great – there were a number of problems related to my crypto settings.
After reviewing the analyzer, I applied the “Best Practices” settings and restarted the server. Once the server booted back up everything was working and I passed the scanner with flying colors.
For reference, I was working with IIS 7.5 running on Windows Server 2008 R2.
I love Azure. It’s a great platform and I’m very happy with the continuing evolution of products and services offered. If you ever have to move resources to a different subscription, there are a lot of little things you have to think about, because sometimes settings are tied to a particular subscription or resource group (which is tied to a subscription).
Some of the non-profit organizations I’ve built applications for have taken advantage of Microsoft’s donation offerings, where they receive Microsoft products and services at a heavily discounted rate. However, these subscriptions often come with a time limit, after which they must be purchased again. When that happens, a new Azure subscription is created and you have to reassign any resources that are under the old subscription to the new one.
The easy part is actually reassigning the resources. There are two ways I see to do this:
- Create a new resource group under the new subscription, and then move all of the resources you would like into that group
- Move the existing resource group to the new subscription. This works better in cases where your resource groups are well-defined. Moving a resource group consists of choosing the resource group in the Azure portal and clicking “Move.”
The trickier part is figuring out any resources that may have been tied to the old resource group name or subscription. Here are a couple I have found:
- SendGrid (and likely other external/third-party applications that can’t use Azure credits) cannot be migrated from one subscription to another. A new API key must be generated for the affected application(s).
- Let’s Encrypt certificates generated using the extension http://www.siteextensions.net/packages/letsencrypt (detailed in the post http://gagetrader.info/2016/09/27/lets-encrypt-azure-win/) have a couple of app settings that are tied to the subscription ID and the resource group. To view them, select the resource where the web job was registered in the Azure portal -> Application Settings -> Keys section. The ones that need to be edited are:
- letsencrypt:SubscriptionId – the new subscription ID (e.g., 7dbf7306-25b3-4e5a-a85a-44017efb9cc5)
- letsencrypt:ResourceGroupName – the new resource group name, if applicable
After you have completed this step, you will find that the web job fails with the following message the next time it runs:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.RenewCertificate ---> Microsoft.Rest.Azure.CloudException: The client 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' with object id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' does not have authorization to perform action 'Microsoft.Web/sites/config/list/action' over scope '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Web/sites/MySite/config/publishingcredentials'. at Microsoft.Azure.Management.WebSites.WebAppsOperations.
This long message basically means that we need to assign the Contributor role to the Let’s Encrypt service principal that was created during configuration of the extension. This is fairly straightforward:
- Make sure Azure Powershell is installed on your machine (Open the Microsoft Web Platform Installer and find Microsoft Azure Powershell on the list if you don’t have it)
- Open PowerShell as an administrator and sign in using the Login-AzureRmAccount command
- Make sure your new Azure Subscription Id is selected. If not, run the following command:
Select-AzureRmSubscription -SubscriptionId Your-Subscription-Id-Guid-Here
- Run the following command to assign the correct permissions to your new subscription Id:
New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName Your-Service-Principal-Name-From-Extension-Setup
Once that has been completed, the job should run successfully again, and your SSL certificates will continue to auto-renew.
I recently had a member of my organization go on maternity leave, and, because of the way that babies work, she wasn’t able to set an out of office response or a forward.
Thankfully, with Exchange Admin Center, it’s pretty easy to do both.
Setting a User’s Out of Office Response
The first thing is to essentially impersonate a user as the Office Admin:
- Navigate to Exchange Admin Center on office.com (you have to be an Administrator to do this, obviously)
- Click your Name/Icon in the very upper right-hand corner and choose “Another user…”
- Choose the user in your organization that you need to update from the popup window that follows
- On the right sidebar of the window that opens for that user, you should see “shortcuts to other things you can do”
- In that list is “Set up an automatic reply message”
That’s it. Once you’ve chosen “Set up an automatic reply message,” you can format the message however you want (one for internal users and one for external users).
Forwarding a User’s emails
- Log into Office 365 Admin Center
- Choose Users -> Active Users
- Find the User whose email you want to forward and click their name
- In the sidebar that opens, expand “Mail Settings” and click “Edit” next to Email Forwarding
- Flip forwarding to On and enter the email address where the messages should be forwarded
- You can optionally choose to keep a copy of the email in the inbox for the original user
Pretty straightforward, but not something I do all that frequently, so I know I’ll forget by the next time I have to do it 🙂
I recently set up a computer for a user who likes to store a lot of files on their desktop. However, this is risky if those files aren’t being backed up. I’m a big proponent of using OneDrive, especially since my organization uses Office 365, which includes 1 TB of OneDrive storage for free with each user license you purchase.
I suggested the user move their desktop files to OneDrive to give them a backup and access from other places, and they mentioned how they are more comfortable with the desktop because they get used to where certain folders and files are organized visually.
It turns out that you can map any folder to the desktop, and it’s easy:
- Open Windows Explorer (Win + e)
- Right-click the Desktop folder and choose “Properties”
- Click the “Location” tab
- Type the location of the directory you want to be the “Desktop” (or click “Move” and browse to the folder)
- Click OK
For the user I mentioned above, I created a folder called “Desktop” in their OneDrive for Business folder and mapped the Desktop to that location. Now there are backups and they can use the desktop as they always have.
H/T to this post for the knowledge: Can you change the location of the Desktop folder in Windows?
One issue I’ve come across lately while working with Entity Framework Migrations has to do with foreign key relationships. If you’ve ever done any reconfiguration of your schema, you know you probably need to update your migration files to get all the data loaded correctly. Let’s take a simple example:
Let’s say you have an Order model for all of your orders defined as such:
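A minimal version might look like the following (OrderDate and Total are placeholder properties I’ve chosen for the example):

```csharp
using System;

public class Order
{
    public int OrderID { get; set; }
    public DateTime OrderDate { get; set; }
    public decimal Total { get; set; }
}
```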
This is obviously a very contrived example and a real order would have a lot of other information, but it works for this example.
Now let’s say you add a new ordering customer and you want to distinguish orders by Customer (probably a good thing to do!). Here is the Customer object:
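A minimal sketch (Name is a placeholder property):

```csharp
public class Customer
{
    public int CustomerID { get; set; }
    public string Name { get; set; }
}
```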
Now we need to modify our order by adding a CustomerID column. The result looks as you’d expect:
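Something along these lines, adding the foreign key column plus a navigation property:

```csharp
using System;

public class Order
{
    public int OrderID { get; set; }
    public DateTime OrderDate { get; set; }
    public decimal Total { get; set; }

    // New: link each order to its customer
    public int CustomerID { get; set; }
    public virtual Customer Customer { get; set; }
}
```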
You’ll need to properly set up your mapping wherever you have that defined. There are multiple ways to do this, but the approach I prefer is to have mapping files defined separately from my POCOs (plain old C# objects, the classes defined above) and then add them in the OnModelCreating function of my Context class like this:
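A sketch of that registration in EF6 style (StoreContext and CustomerMap are names I’ve made up for the example):

```csharp
using System.Data.Entity;

public class StoreContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Register the separate mapping classes
        modelBuilder.Configurations.Add(new OrderMap());
        modelBuilder.Configurations.Add(new CustomerMap());
    }
}
```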
Inside the constructor of my OrderMap class, this line of code will add the relationship between Customer and Order:
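Assuming an EF6 EntityTypeConfiguration, it would look something like this (WithMany() is parameterless here because I haven’t assumed an Orders collection on Customer):

```csharp
using System.Data.Entity.ModelConfiguration;

public class OrderMap : EntityTypeConfiguration<Order>
{
    public OrderMap()
    {
        // Each order requires exactly one customer
        this.HasRequired(o => o.Customer)
            .WithMany()
            .HasForeignKey(o => o.CustomerID);
    }
}
```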
Now, with all that setup, if you add a migration, Entity Framework will scaffold the changes required to make all this happen.
You’ll probably want to modify a couple of things, though.
For starters, you’ll want to set up your customers and point all of your existing orders at the correct customer IDs. You have to do this before adding the foreign key between Order and Customer, because after you add the new column (it is non-nullable), all CustomerID fields in your Orders table will be “0”.
Your migration might look something like this (EF will also add some indexes, which I’ve omitted for brevity):
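A sketch of the scaffolded Up() method, given the models above:

```csharp
public override void Up()
{
    CreateTable(
        "dbo.Customers",
        c => new
        {
            CustomerID = c.Int(nullable: false, identity: true),
            Name = c.String(),
        })
        .PrimaryKey(t => t.CustomerID);

    // Non-nullable, so every existing order row gets CustomerID = 0
    AddColumn("dbo.Orders", "CustomerID", c => c.Int(nullable: false));

    AddForeignKey("dbo.Orders", "CustomerID", "dbo.Customers", "CustomerID");
}
```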
You’ll need to add code to update those customers. I usually write a line of SQL like the following and place it after the CreateTable call but before the AddForeignKey call:
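For instance, with hypothetical seed data (the customer name and ID are placeholders):

```csharp
// Create the known customer, then point every existing order at it
// before the foreign key is added.
Sql(@"SET IDENTITY_INSERT dbo.Customers ON;
      INSERT INTO dbo.Customers (CustomerID, Name) VALUES (1, 'Acme Corp');
      SET IDENTITY_INSERT dbo.Customers OFF;
      UPDATE dbo.Orders SET CustomerID = 1;");
```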
This will then get everything in your database ready to add that foreign key constraint. Of course, if you don’t like hardcoding company information into your migrations (not a great practice, really), you can do this after the fact, but sometimes you already have all the data in your database and just need to move it around due to a schema change. This is, again, a bit of a contrived example.
Now, to the tricky part I really want to highlight: you have to be really consistent in the way you add foreign keys.
For example, these two lines are slightly different – the first uses the schema name for both the dependent and principal tables, while the second only does so for the dependent table:
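For illustration, calls of this shape:

```csharp
AddForeignKey("dbo.Orders", "CustomerID", "dbo.Customers", "CustomerID");
AddForeignKey("dbo.Orders", "CustomerID", "Customers", "CustomerID");
```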
The foreign key names they generate are as follows:
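Under EF’s default convention (FK_{dependentTable}_{principalTable}_{columnName}), those two calls would produce:

```
FK_dbo.Orders_dbo.Customers_CustomerID
FK_dbo.Orders_Customers_CustomerID
```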
If you later try to drop a foreign key and don’t use the same exact format as you did when setting it up, you will encounter errors – usually something like:
The object 'FK_dbo.Orders_dbo.Customers_CustomerID' is dependent on column 'CustomerID'.
ALTER TABLE DROP COLUMN CustomerID failed because one or more objects access this column.
So the moral here is to be very consistent with your foreign key naming scheme. If you’ve got an old database that you’ve added code first to after the fact, you’ll probably have a lot of relationships that don’t use the schema name in the key name, so you’ll run into this frequently if you’re modifying your schema.
The title is a bit of a mouthful, but I’ve recently encountered a situation where I had multiple SSL certificates I wanted bound to two different domains being hosted on the same server with the same IP address.
Using the Internet Information Services (IIS) Manager UI in IIS 7 (not sure if this applies to newer versions), you can assign a binding for an SSL certificate to port 443, but you can only enter an IP address, not the host-header information.
In order to accomplish this, you have to use command line tools. Below is a great resource I found that helped me solve this problem. I’ll pull out the most relevant command:
appcmd set site /site.name:"MySubDomainSite" /+bindings.[protocol='https',bindingInformation='*:443:mysubdomain.mysite.com']
In this example, “MySubDomainSite” is the site you have defined in IIS for the subdomain (or domain) where you are trying to assign the second certificate.
I’m a big fan of Microsoft’s Office 365 offerings, and one of my favorite components of that is the OneDrive storage that comes with each user. It’s a great way to make sure files are always backed up and available anywhere you have an internet connection.
One of the challenges you can encounter when using OneDrive for Business is that not all characters that are valid for filenames in a Windows environment are valid for use in OneDrive (and SharePoint Online files, for that matter). According to this article, the list of invalid characters is: \ / : * ? " < > | # %.
There are several other restrictions (such as file size and name length), but the invalid characters issue is the one I encounter the most frequently, especially when I migrate a user who has a lot of files to OneDrive for Business. In one case, there were over 1,000 files with invalid characters, and I wasn’t about to rename those by hand.
Looking online, there are several great resources for PowerShell scripts to solve this problem, although some were out of date and did not accurately reflect the current restrictions. In the end, there were two sources I liked a lot, so I combined them into a single script and modified it. It’s not a perfect or complete script, and I don’t consider myself even proficient with PowerShell. However, this script has worked well for me.
First, the sources:
Use PowerShell to check for illegal characters before uploading multiple files into SharePoint
Fix file names for Skydrive Pro syncing
I liked that the second one outputs to a TSV file that you can open with Excel and review. The first source just prints to the console, and when you have a ton of files, that isn’t very useful. Because of the output file, I was able to tweak things a little more, adding in a few additional things, such as searching for files with “%20” in the name (a URL-encoded space).
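The combined script itself is in the sources linked above, but as a rough sketch of what the scan does, here is the same idea in C# (the root path is a placeholder; the character list comes from the article cited earlier, minus \ and /, which cannot appear in a Windows file name anyway):

```csharp
using System;
using System.IO;

class InvalidNameScanner
{
    // Characters OneDrive for Business rejects in file names
    static readonly char[] Invalid = { ':', '*', '?', '"', '<', '>', '|', '#', '%' };

    static void Main()
    {
        string root = @"C:\Users\SomeUser\Desktop"; // placeholder path
        foreach (var path in Directory.EnumerateFileSystemEntries(
                     root, "*", SearchOption.AllDirectories))
        {
            string name = Path.GetFileName(path);
            // Flag names containing invalid characters or an encoded space
            if (name.IndexOfAny(Invalid) >= 0 || name.Contains("%20"))
                Console.WriteLine(path);
        }
    }
}
```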
I noticed that I ran into some errors when files were deeply nested – for example, if you have to rename a file in a folder that was also renamed. Re-running the script a few times fixed that problem for me: each pass fixes one more layer of folders, so a file inside three levels of folders that also needed renaming would take four passes to complete everything.
Every once in a while I need to get a list of just the file names in a directory. Sometimes there are a LOT of files and it would be a pain in the ass to type them out or write a program to manipulate them in some way.
The quick way to do this in Windows (images below are for Windows 10) is as follows:
- Open a windows explorer window (shortcut: hit Win + e, where “Win” is the Windows key) and navigate to the folder containing the files you want to print out
- Hold the Shift key and right-click in the folder (don’t click on any actual files) and choose the option “Open Powershell Window here”
- In the PowerShell window that opens, type Get-ChildItem -Name
That’s it! Now it prints a list of file names that you can copy/paste from the PowerShell window into something else (Excel, Notepad, Word, whatever)
Alternate method: command prompt
- Open a command prompt (Hit the Windows key, type “cmd” and hit enter)
- Navigate to the folder containing the files (type cd "<your file path>", for example: cd "C:\Users\Public\Public Pictures\Sample Pictures")
- Type dir /b (the /b switch lists bare file names with no extra details)
- If you’re using Windows 10, you can simply copy/paste from the command prompt. With an older version of windows, you need to:
- Right click in the command window and choose “Select All”
- Hit the Enter key. This will copy the contents of the command prompt to your paste buffer so you can paste using ctrl + v or right-click -> Paste
I’ve owned a small Synology Diskstation for a few years and really love its features and capabilities, especially considering its cost. One of the primary roles of my DiskStation is to backup my home computers. After recently purchasing a Surface Pro 4 and applying the Creators’ update, I was having trouble connecting to my DiskStation running DiskStation 6.1. After reading quite a few posts online about different problems, it seems the solution I needed was really quite basic.
What was most curious about this problem was that I could not see my DiskStation under “Network” in Windows Explorer, and I received error 53 (“System error 53 has occurred. The network path was not found.”) when I tried to map the network drive using the command prompt like so:
net use T: \\DiskStation
Performing nbtstat -c from the command line and running net view both listed my DiskStation with the UNC I was expecting, and the nbtstat output showed the correct IP (I have mine configured as static).
First, I made sure the SMB settings on the DiskStation were set to allow from SMB 1.0 to SMB 3.0 (DiskStation Control Panel -> File Services -> SMB -> Advanced Settings -> Maximum/Minimum SMB Protocol Settings).
Then, in Windows, if I opened Explorer and navigated to the IP address or the UNC share name (\\DiskStation, for example), it would prompt me for a password. This was the primary point of failure for me earlier – I had forgotten that when logging into another server, whether it’s a Synology DiskStation or a Windows Server, you have to provide the server name AND the account name (or domain name\account name, in the event that you’re connected to a domain).
So the username wasn’t just “MyUsername”; it was “DiskStation\MyUserName”. Once I entered it that way, my DiskStation appeared under Network.