“Fix” is a loose term for this solution: it restores your computer to a workable state without losing any files or applications, but you have to disable Windows Update so the affected update isn’t downloaded over and over again.
Update: There is now a patch out to fix this: https://support.microsoft.com/en-us/help/4090913/march5-2018kb4090913osbuild16299-251. Note that when I checked for updates using Windows Update, it downloaded the bad patch again, so I uninstalled that and downloaded the patch linked from the article above (the update itself can be downloaded from the Microsoft Update Catalog: http://www.catalog.update.microsoft.com/Search.aspx?q=KB4090913).
If you encounter the blue screen of death with the message “Inaccessible Boot Device” and a frowny face, you’ve probably just updated Windows and the computer attempted to reboot.
I will not take any credit for this fix, as I spent several days looking for ways to resolve this problem when it affected machines in my office. A huge thanks goes to reddit user zosan for providing this excellent, step-by-step guide:
How-To: Fix “INACCESSIBLE_BOOT_DEVICE” caused by Win10 Spectre builds 16299.125 or 16299.192 (posted to r/pcgaming)
Hopefully Microsoft resolves this sometime soon so updates don’t have to be disabled too long.
Converting SQL data types can be a bit finicky, and, at least for this guy, converting a stored, large integer value to a string is not intuitive at all.
I mostly run into this when I import values from a data source like an Excel sheet that stores values such as tracking numbers as a float. From there, I usually write a cursor to update tables in my system with these values, and when those tables use a column type of varchar or nvarchar, you have to convert from float to varchar.
One would think that using CONVERT(varchar(50), TrackingNumber) would do the trick, but when a float is converted directly to varchar, the value comes out in scientific notation.
The real trick is to first convert the float value to a bigint and THEN convert it to a varchar, as shown below:
CONVERT(varchar(50), CONVERT(bigint, TrackingNumber))
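A quick sketch to illustrate (the tracking number value is made up):

```sql
-- Hypothetical example: a tracking number stored as float
DECLARE @TrackingNumber float = 784309218450;

-- Converting the float directly yields scientific notation (something like '7.84309e+011')
SELECT CONVERT(varchar(50), @TrackingNumber);

-- Converting to bigint first preserves every digit: '784309218450'
SELECT CONVERT(varchar(50), CONVERT(bigint, @TrackingNumber));
```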
Earlier this year, I built a .Net Core Web Application and deployed it on IIS 7.5. I noticed right away that it was extremely slow on initial load. This was confusing because I tested deployment to Azure and the performance was great. I struggled to figure out why it loaded so slowly but all of my .Net Framework sites ran quickly.
I struggled to find an answer for a long time, but while reading through documentation as I set up another application, I came across this bit of information:
You can make use of the preloading feature to have applications running before users connect. In your ApplicationHost.config, add the preloadEnabled attribute to the <application> element associated with the application. The application node is a child element of the sites node:
<site name="Default Web Site" id="1">
  <application path="/rssbus" applicationPool="DefaultAppPool" preloadEnabled="true">
  </application>
</site>
When PreloadEnabled is set to true, IIS will simulate a user request to the default page of the website or virtual directory so that the application initializes.
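If you’d rather not edit ApplicationHost.config by hand, the same attribute can be set from an elevated command prompt with appcmd (a sketch; substitute your own site and application path):

```cmd
%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/rssbus" /preloadEnabled:true
```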
While it technically is a bit of a workaround, since preloading doesn’t solve the root cause of the slow initial load, it has kept my application loading quickly since I enabled it.
TL;DR summary of the fastest way to enable TLS 1.2 on IIS 7.5:
- Download IIS Crypto at https://www.nartac.com/Products/IISCrypto/
- Run the executable on your server
- On the user interface, click the “Best Practices” button (located at bottom left)
- Click “Apply” (located at bottom right)
- Reboot Server
The full details:
Today I was contacted by a third-party company that exchanges data with mine and they informed me that they were requiring TLS 1.2 connections as of the new year. Reviewing information about my server’s crypto configuration, I found that, indeed, TLS 1.1 and TLS 1.2 were not enabled.
In setting out to resolve the problem, I ran across a couple of posts that talked about updating registry keys and doing some other messy stuff. And then, I found this post on ServerFault about an awesome tool called IIS Crypto.
From the IIS Crypto website:
IIS Crypto is a free tool that gives administrators the ability to enable or disable protocols, ciphers, hashes and key exchange algorithms on Windows Server 2008, 2012 and 2016. It also lets you reorder SSL/TLS cipher suites offered by IIS, implement best practices with a single click, create custom templates and test your website.
Not only is the tool free, it doesn’t even install anything on your machine.
After downloading and running, I looked over the list of available protocols, ciphers, etc. They provide a “Best Practices” button which enables only the protocols, ciphers, etc. that should be enabled using, well, current best practices. This is another awesome feature because the list of everything to review is fairly extensive and not having to do the research myself on these is a huge time saver.
On the program’s menu is a “Site Scanner” tool that will open up a browser and analyze your site. You can use this without running the application. The URL is:
https://www.ssllabs.com/ssltest/analyze.html?d=<your site>&hideResults=on (where <your site> is the website you want to analyze)
The analyzer checks your certificate(s), available protocols, and cipher suites, performs handshake simulations with a bevy of operating system / user-agent combinations (well over 50), and analyzes against various attacks. When I first ran the test, the results weren’t so great – there were a number of problems related to my crypto settings.
After reviewing the analyzer, I applied the “Best Practices” settings and restarted the server. Once the server booted back up everything was working and I passed the scanner with flying colors.
For reference, I was working with IIS 7.5 running on Windows Server 2008 R2.
I love Azure. It’s a great platform and I’m very happy with the continuing evolution of products and services offered. If you ever have to move resources to a different subscription, there are a lot of little things you have to think about, because sometimes settings are tied to a particular subscription or resource group (which is tied to a subscription).
Some of the non-profit organizations I’ve built applications for have taken advantage of Microsoft’s donation offerings, where they receive Microsoft products and services at a heavily discounted rate. However, these subscriptions often come with a time limit, after which they must be purchased again. When that happens, a new Azure subscription is created and you have to reassign any resources that are under the old subscription to the new one.
The easy part is actually reassigning the resources to the new subscription. There are two ways I see to do this:
- Create a new resource group, assigning it to the new subscription ID, and then assign all of the resources you would like to that group
- Move the existing resource group to a new subscription. This works better in cases where your resource groups are well-defined.
- Moving a resource group consists of choosing the resource group in the Azure portal and clicking “Move.”
The trickier part is figuring out any resources that may have been tied to the old resource group name or subscription. Here are a couple I have found:
- SendGrid (and likely other external/third-party applications that can’t use Azure credits) cannot be migrated from one subscription to another. A new API key must be generated for each application that uses it.
- Let’s Encrypt certificates generated using the extension http://www.siteextensions.net/packages/letsencrypt (detailed in the post http://gagetrader.info/2016/09/27/lets-encrypt-azure-win/) have a couple of keys that are tied to the subscription Id and the resource group. To view the keys, select the resource where the web job was registered in the Azure portal, then go to Application Settings -> Keys. The ones that need to be edited are:
- letsencrypt:SubscriptionId – 7dbf7306-25b3-4e5a-a85a-44017efb9cc5
- letsencrypt:ResourceGroupName: (New Resource Group Name, if applicable)
After you have completed this step, you will find that the web job fails with the following message the next time it runs:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.RenewCertificate —> Microsoft.Rest.Azure.CloudException: The client ‘xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx’ with object id ‘xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx’ does not have authorization to perform action ‘Microsoft.Web/sites/config/list/action’ over scope ‘/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Web/sites/MySite/config/publishingcredentials’. at Microsoft.Azure.Management.WebSites.WebAppsOperations.
This long message basically means that the Let’s Encrypt service principal created during configuration of the extension needs to be assigned the Contributor role on the new subscription. This is fairly straightforward:
- Make sure Azure Powershell is installed on your machine (Open the Microsoft Web Platform Installer and find Microsoft Azure Powershell on the list if you don’t have it)
- Open Powershell as an administrator and sign in using the command:
Login-AzureRmAccount
- Make sure your new Azure Subscription Id is selected. If not, run the following command:
Select-AzureRmSubscription -SubscriptionId Your-Subscription-Id-Guid-Here
- Run the following command to assign the correct permissions to your new subscription Id:
New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName Your-Service-Principal-Name-From-Extension-Setup
Once that has been completed, the job should run again and be successful. Now your SSL certificates will continue to auto-renew.
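Putting the PowerShell steps together (a sketch; the subscription ID and service principal name are placeholders you must fill in from your own setup):

```powershell
# Sign in to Azure (AzureRM-era cmdlets, matching the commands above)
Login-AzureRmAccount

# Make sure the new subscription is selected
Select-AzureRmSubscription -SubscriptionId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

# Grant the Let's Encrypt service principal the Contributor role
New-AzureRmRoleAssignment -RoleDefinitionName Contributor `
    -ServicePrincipalName 'your-service-principal-name'

# Optional sanity check: list current Contributor assignments
Get-AzureRmRoleAssignment -RoleDefinitionName Contributor
```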
One issue I’ve come across lately while working with Entity Framework Migrations has to do with foreign key relationships. If you’ve ever done any reconfiguration of your schema, you know you probably need to update your migration files to get all the data loaded correctly. Let’s take a simple example:
Let’s say you have an Order model for all of your orders defined as such:
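A minimal Order model for this walkthrough might look like the following (a sketch; the property names are assumptions):

```csharp
public class Order
{
    public int OrderID { get; set; }
    public DateTime OrderDate { get; set; }
    public decimal Total { get; set; }
}
```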
This is obviously a very contrived example and a real order would have a lot of other information, but it works for this example.
Now let’s say you add a new ordering customer and you want to distinguish orders by Customer (probably a good thing to do!). Here is the Customer object:
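A matching Customer sketch (again, the property names are assumptions):

```csharp
public class Customer
{
    public int CustomerID { get; set; }
    public string Name { get; set; }
}
```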
Now we need to modify our order by adding a CustomerID column. The result looks as you’d expect:
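The modified Order might then look like this (a sketch; the navigation property is an assumption):

```csharp
public class Order
{
    public int OrderID { get; set; }
    public DateTime OrderDate { get; set; }
    public decimal Total { get; set; }

    // New foreign key column plus a navigation property to Customer
    public int CustomerID { get; set; }
    public virtual Customer Customer { get; set; }
}
```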
You’ll need to properly set up your mapping where ever you have that defined. There are multiple ways to do this, but the approach I prefer is to have mapping files defined separate from my POCOs (Plain old C# objects, the classes defined above) and then add them in the OnModelCreating function of my Context class like this:
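With EF6-style mapping classes kept separate from the POCOs, the registration might look like this (a sketch; the map class names are assumptions):

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Register each mapping file with the model builder
    modelBuilder.Configurations.Add(new OrderMap());
    modelBuilder.Configurations.Add(new CustomerMap());
}
```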
Inside the constructor of my OrderMap class, this line of code will add the relationship between Customer and Order:
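The relationship might be configured like so (a sketch using EF6’s fluent API):

```csharp
// Every Order requires a Customer; CustomerID is the foreign key column
this.HasRequired(o => o.Customer)
    .WithMany()
    .HasForeignKey(o => o.CustomerID);
```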
Now, with all that setup, if you add a migration, Entity Framework will scaffold the changes required to make all this happen.
You’ll probably want to modify a couple of things, though.
For starters, you’ll want to set up your customers and point all of your existing orders at the appropriate CustomerID. You also have to do this before adding the foreign key between Order and Customer, because the new column is non-nullable, so every CustomerID in your Orders table will initially be “0”.
Your migration might look something like this (EF will also add some indexes, which I’ve omitted for brevity):
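A scaffolded Up() for this change might look roughly like the following (a sketch; EF’s actual output will also include indexes and a Down() method):

```csharp
public override void Up()
{
    CreateTable(
        "dbo.Customers",
        c => new
        {
            CustomerID = c.Int(nullable: false, identity: true),
            Name = c.String(),
        })
        .PrimaryKey(t => t.CustomerID);

    AddColumn("dbo.Orders", "CustomerID", c => c.Int(nullable: false));
    AddForeignKey("dbo.Orders", "CustomerID", "dbo.Customers", "CustomerID");
}
```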
You’ll need to add code to update those customers. I usually write a bit of SQL like the following and place it after the CreateTable call but before the AddForeignKey call:
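For example, something along these lines, placed between CreateTable and AddForeignKey (the company name is a placeholder):

```csharp
// Seed a customer and point all existing orders at it (hypothetical values)
Sql(@"INSERT INTO dbo.Customers (Name) VALUES ('Acme Corp');
      UPDATE dbo.Orders SET CustomerID =
          (SELECT CustomerID FROM dbo.Customers WHERE Name = 'Acme Corp');");
```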
This will then get everything in your database ready to add that foreign key constraint. Of course, if you don’t like hardcoding company information into your migrations (not a great practice, really), you can do this after the fact, but sometimes you already have all the data in your database and just need to move it around due to a schema change. This is, again, a bit of a contrived example.
Now, to the tricky part I really want to highlight: you have to be really consistent in the way you add foreign keys.
For example, these two lines are slightly different – the first one uses the schema name for both the dependent and principal tables, while the second only does so for the dependent table:
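For illustration, the two calls might look like this (a sketch):

```csharp
// Schema name on both the dependent and principal tables
AddForeignKey("dbo.Orders", "CustomerID", "dbo.Customers", "CustomerID");

// Schema name on the dependent table only
AddForeignKey("dbo.Orders", "CustomerID", "Customers", "CustomerID");
```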
The foreign key names they generate are as follows:
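Following EF6’s convention of FK_&lt;dependent&gt;_&lt;principal&gt;_&lt;column&gt;, the generated names would differ along these lines:

```
FK_dbo.Orders_dbo.Customers_CustomerID
FK_dbo.Orders_Customers_CustomerID
```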
If you later try to drop a foreign key and don’t use the same exact format as you did when setting it up, you will encounter errors – usually something like:
The object ‘FK_dbo.Orders_dbo.Customers_CustomerID’ is dependent on column ‘CustomerID’.
ALTER TABLE DROP COLUMN CustomerID failed because one or more objects access this column.
So the moral here is to be very consistent with your foreign key naming scheme. If you’ve added Code First to an old database after the fact, you’ll probably have a lot of relationships that don’t use the schema name in the key name, and you’ll run into this frequently when modifying your schema.