Enabling TLS 1.2 on a Windows 10 development machine

I’ve noticed a lot of services lately announcing that connections using TLS 1.0 and 1.1 are going to be disabled in the near future. While I haven’t had any problems yet in my production environment (because TLS 1.2 is configured correctly there), I noticed a problem in a test environment. The error message I was receiving from one particular service was:

The request was aborted: Could not create SSL/TLS secure channel

At first, I wasn’t quite sure why there was a problem, but after I received an email from the service provider about sunsetting TLS 1.0 and 1.1, it dawned on me that my dev box didn’t have TLS 1.2 properly configured.

I develop on Windows, and the way you enable TLS 1.2 for .NET Framework applications on Windows (or at least the only way I found) is directly through the Registry Editor.

The subkeys you need to edit are as follows*:

  • Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v2.0.50727
  • Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319
  • Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v2.0.50727
  • Computer\HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319

For each of these, add 2 DWORDs:

  1. SystemDefaultTlsVersions (value data: 1)
  2. SchUseStrongCrypto (value data: 1)
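
If you would rather script this than click through regedit, a PowerShell sketch like the following (run from an elevated prompt) should create all four pairs of values; trim the subkey list to the ones your machine actually needs:

$subkeys = @(
	'HKLM:\SOFTWARE\Microsoft\.NETFramework\v2.0.50727',
	'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
	'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v2.0.50727',
	'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319'
)

foreach ($key in $subkeys)
{
	# Create both DWORDs; -Force overwrites the values if they already exist
	New-ItemProperty -Path $key -Name 'SystemDefaultTlsVersions' -Value 1 -PropertyType DWord -Force | Out-Null
	New-ItemProperty -Path $key -Name 'SchUseStrongCrypto' -Value 1 -PropertyType DWord -Force | Out-Null
}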

And that’s it. As soon as I added those keys to my registry, the application on my development box started working again.

*Note: depending on whether you are running on a 32 or 64-bit machine and running 32-bit or 64-bit software, you may not need to add all four subkeys. See the resources below for more details.

Resources

https://support.microsoft.com/en-us/help/4040243/how-to-enable-tls-1-2-for-configuration-manager

Adding an Office 365 User / AzureAD user as a Windows Local Administrator

I really like Office 365’s use of Azure Active Directory users to allow sign-on to machines that are set up to be managed by an organization. For one thing, it just makes working with Office 365 a little bit easier; in many cases, you no longer have to re-enter the Office 365 password after you’ve signed in because the sign-in credentials are used for application authentication as well (Outlook is a little different, but once you’ve saved the password for the user’s email account, you shouldn’t have to enter it again).

One thing that can be a bit of a headache, though, is setting up the local machine with the appropriate permissions. If you have a user you want to give administrative access to, the old way was to go into Computer Management and add the local user to the Administrators group.

The problem with AzureAD users is that they don’t get added to the users list in Computer Management:

AzureAD users are not added to the list of users

The easiest way I have found to give admin access is a simple command that can be run in PowerShell, a command prompt, the Run dialog, etc.:

net localgroup administrators AzureAD\UserName /add

Where “UserName” is the AzureAD user’s name (as displayed in the C:\Users folder). Make sure to run that in a prompt or PowerShell terminal that has administrative rights, or you may get an access denied message.
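
Alternatively, Windows 10 ships with PowerShell’s LocalAccounts module, so (assuming that module is available on your build) the same change can be made from an elevated PowerShell session with a cmdlet:

# Equivalent to the net localgroup command above
Add-LocalGroupMember -Group "Administrators" -Member "AzureAD\UserName"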

Reseeding a table’s identity column in SQL and fixing any gaps in identity column values

Identity columns can be very useful and also a source of serious frustration. If you’ve ever written a cursor or a while loop but forgotten to fetch the next record in your loop, you know that you can blow up an identity pretty quickly if your cursor is doing inserts. Even if you’ve put your entire cursor in a transaction and a try/catch, the identity counter will not roll back in the event of a failure. If inserts were performed AFTER your cursor blew up the identity, then you’d need to change the identity column back to the next value in the original series or just live with a giant gap in your identity values.

For example, let’s say you have an orders table with an identity column named “ID”. The current identity value (the ID of the last order created) is 50,000. Now let’s say you write a cursor or loop to insert a bunch of new orders from some data source that is outside the norm, but you forget the “FETCH NEXT FROM cursor INTO …” line in your while loop and you execute the script. After a few seconds, you think to yourself – gee, this script is taking a long time to run – let’s examine the code. After looking, you realize what you’ve done and stop the script. You also have to roll back the transaction. Now you fix your script and run it again. All the orders are created perfectly, except the IDs start at 150,001. You have a gap of 100,000 IDs between the last original order and the first new one.

This really doesn’t matter all that much, but if you’re very anal retentive like me and have a dataset that isn’t all that important (I wouldn’t really recommend this for a mission critical application, unless you really have to), then here is how you can get back on track as if the cursor never blew up your identity in the first place.

The basic steps are as follows:

  1. Set your table’s identity back to the last good record (before you blew it up with your cursor or while loop). In the example, that would be ID 50,000
  2. If records were inserted into your table after the identity blew up, insert copies of each of those records so they receive new IDs in the corrected range (if not, skip the remaining steps – step 1 is good enough)
  3. After inserting these copies, update any tables with foreign keys so they point to the new order IDs
  4. Delete the “original” orders (the ones with IDs over 150,000)

You’ll want to do all of this in a transaction so you don’t screw anything up even further. Following with our example, here is a script that would accomplish this task:

DECLARE @originalOrderID int, @newOrderID int
SET @originalOrderID = 150001

DECLARE @customerID int, @warehouseID int, @createDate datetime, @approvalDate datetime, @shipDate datetime, @dueDate datetime, @addressID int

BEGIN TRANSACTION tx
BEGIN TRY
	/* Step 1 - Set table's identity to last good record. In this case, that is 50000 */
	DBCC CHECKIDENT ('[Orders]', RESEED, 50000);

	WHILE @originalOrderID <= 150150
	BEGIN
		/* Step 2a - Copy the data from the record identified by the @originalOrderID variable. In this loop, that runs from ID 150001 to 150150 */
		SELECT @customerID = CustomerID, @warehouseID = WarehouseID, @createDate = CreateDate, @approvalDate = ApprovalDate, @shipDate = ShipDate, @dueDate = DueDate, @addressID = AddressID
		FROM Orders
		WHERE ID=@originalOrderID
		
		/* Step 2b - Insert the copied data */
		INSERT INTO Orders (CustomerID, WarehouseID, CreateDate, ApprovalDate, ShipDate, DueDate, AddressID)
		VALUES (@customerID, @warehouseID, @createDate, @approvalDate, @shipDate, @dueDate, @addressID)

		/* Step 2c - Grab the new ID created by the insert */
		SET @newOrderID = SCOPE_IDENTITY()

		/* Step 3 - Update foreign key references by replacing the original order ID with the new one we just inserted */
		UPDATE OrderItems
		SET OrderID=@newOrderID
		WHERE OrderID=@originalOrderID

		UPDATE ShippingBoxes
		SET OrderID=@newOrderID
		WHERE OrderID=@originalOrderID

		/* Step 4 - Delete the original order */
		DELETE 
		FROM Orders
		WHERE ID=@originalOrderID

		/* Don't forget to increment your variable - that's how we got in this mess in the first place */
		SET @originalOrderID = @originalOrderID + 1
	END
COMMIT
END TRY
BEGIN CATCH
	ROLLBACK
	PRINT ERROR_MESSAGE()
END CATCH

One thing to note about this – the DBCC CHECKIDENT command (MSDN docs here: https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-checkident-transact-sql?view=sql-server-2017) simply resets the seed to whatever you supply. The next insert will use your supplied number + 1, so for our example, the first insert after calling DBCC CHECKIDENT will get ID 50,001. Note that if you reseed your table but don’t delete the rows inserted above the reseed point, you will run into insert conflicts down the line. Identity does not check whether a value already exists before inserting, so you will get an error if you don’t fix it.
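
If you want to double-check the current identity value before and after the reseed, the NORESEED option reports it without changing anything:

/* Report the current identity value (and the maximum value in the column) without changing anything */
DBCC CHECKIDENT ('[Orders]', NORESEED);

/* Reseed so that the next insert receives ID 50001 */
DBCC CHECKIDENT ('[Orders]', RESEED, 50000);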

Of course, you could just leave your identity value at the new number and “mind the gap,” as the British would say, but this is an idea for how to deal with it if you choose to.

Excel Manual Calculations vs Automatic Calculations and Converting data to Numbers

Excel has the ability to switch between manual and automatic calculation modes. Sometimes, this switch happens to sheets seemingly at random. I know there have been times when I’ve been working on a sheet, closed it, and the next time I opened it, my formulas wouldn’t update until I clicked into a cell and then out of it. When manual mode is enabled, formulas won’t update until you explicitly recalculate or re-enter a cell.

Sometimes, that is a huge benefit. Consider the following: you have a sheet with 10 columns x 10,000 rows where each column contains numbers stored as text, and you need to convert those values to actual numbers. If you do this with automatic calculation on, the conversion takes a while to run. With manual calculation, it’s almost instantaneous. Why? Because with automatic calculation, the sheet has to determine what needs to be recalculated after every single cell is converted. A brief description from the Microsoft documentation on calculation modes:

Excel performance: Improving calculation performance

The smart recalculation engine in Excel tries to minimize calculation time by continuously tracking both the precedents and dependencies for each formula (the cells referenced by the formula) and any changes that were made since the last calculation. At the next recalculation, Excel recalculates only the following:

  • Cells, formulas, values, or names that have changed or are flagged as needing recalculation.
  • Cells dependent on other cells, formulas, names, or values that need recalculation.
  • Volatile functions and visible conditional formats.

Excel continues calculating cells that depend on previously calculated cells even if the value of the previously calculated cell does not change when it is calculated.

Because you change only part of the input data or a few formulas between calculations in most cases, this smart recalculation usually takes only a fraction of the time that a full calculation of all the formulas would take.

In manual calculation mode, you can trigger this smart recalculation by pressing F9. You can force a full calculation of all the formulas by pressing Ctrl+Alt+F9, or you can force a complete rebuild of the dependencies and a full calculation by pressing Shift+Ctrl+Alt+F9.

A keyboard shortcut (Windows – not sure about Mac) to switch between manual and automatic calculation modes:

  • Alt + M, X, M (to manual)
  • Alt + M, X, A (to automatic)

In order for that to work, you must continue to hold the Alt key and then press M, X, then M or A.
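
If you do this kind of text-to-number conversion often, the same toggle is available from VBA. Here is a minimal sketch (the conversion of the current selection is just an illustration) that switches to manual calculation, converts the selected text values to numbers, and switches back:

Sub ConvertTextToNumbers()
	' Switch to manual so Excel doesn't recalculate after every cell change
	Application.Calculation = xlCalculationManual

	Dim cell As Range
	For Each cell In Selection
		If IsNumeric(cell.Value) Then
			cell.NumberFormat = "General" ' Clear Text formatting so the value is stored as a number
			cell.Value = CDbl(cell.Value)
		End If
	Next cell

	' Switch back and recalculate once at the end
	Application.Calculation = xlCalculationAutomatic
	Application.Calculate
End Sub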

Tip: How to Change the Default Save Directory for Attachments in Microsoft Outlook 2016 for Windows

Sometimes, you are surprised when functionality that should obviously have a configuration point does not have one. That is definitely the case when it comes to the default save path for attachments in Microsoft Outlook 2016.

  • Open The Registry Editor
    • Hit the Windows key and type “regedit” to bring up the Registry Editor application
  • Navigate to HKEY_CURRENT_USER\SOFTWARE\Microsoft\Office\16.0\Outlook\Options
  • Create a String Value called “DefaultPath” and set its value to the folder path you want

Registry Editor after the DefaultPath value has been added
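
If you’d rather not click through regedit at all, a PowerShell one-liner can create the value (the folder path here is just an example):

# Substitute your own folder for C:\Attachments
New-ItemProperty -Path 'HKCU:\SOFTWARE\Microsoft\Office\16.0\Outlook\Options' -Name 'DefaultPath' -Value 'C:\Attachments' -PropertyType String -Force | Out-Null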

Thanks to user KevonaOne on the Microsoft forums for this tip: Q: Office Outlook 2016: Default Save Location for Email Attachments

Using DotNetZip with a Memory Stream

A colleague of mine recently asked about the possibility of adding a download button that would allow multiple files to be added to a single archive and downloaded. A quick Google search led me to the NuGet package DotNetZip.

This is a great package and very intuitive to use. Within thirty minutes, I had it all wired into my application and it worked as expected. There were only a couple of minor details I had to work through to get everything the way I wanted.

Here is the code:

public ExportDocumentResponse DownloadFilePathsAsZip(IEnumerable<string> filePaths)
{
	if (filePaths.Any())
	{
		using (IStream stream = IStreamFactory.Create(new IStreamParameters()
		{
			StreamType = StreamTypeEnum.MemoryStream
		}))
		{
			using (Ionic.Zip.ZipFile zipFile = new Ionic.Zip.ZipFile())
			{
				foreach (var filePath in filePaths)
				{
					// The empty second argument places each file at the root of the archive
					zipFile.AddFile(filePath, "");
				}
				// Write the finished archive into the underlying memory stream
				zipFile.Save(stream.GetUnderlyingSource());
			}

			return new ExportDocumentResponse()
			{
				DocumentStream = stream,
				ContentType = "application/zip"
			};
		}
	}
	
	return null;
}

This method takes an IEnumerable of strings that are the full disk file paths of the files I want to add to the zip file. In my application, I have a database that stores references to these filenames, so I have some logic outside of this method that creates the full file paths that I want.

From there, it’s a matter of creating a MemoryStream, instantiating the ZipFile, adding each file to the zip, and saving the zip out to the memory stream.

The ExportDocumentResponse object is a custom object that allows me to pass the result back to my view (in this application, I’m using an MVP pattern where the object is passed back to the View, which sets up the HttpResponse headers and copies the memory stream to the HttpResponse object via the CopyTo method). That logic looks something like this:

public void HandleResponse(IHttpResponse response, ExportDocumentResponse exportResponse, string fileNameWithExtension)
{
	response.Clear();
	response.ContentType = exportResponse.ContentType;
	response.AddHeader("content-disposition", String.Format("attachment;filename={0}", fileNameWithExtension));
	exportResponse.DocumentStream.Position = 0;
	exportResponse.DocumentStream.CopyTo(response.OutputStream);
	response.End(); //ThreadAbortException will be handled by presenter
}

Also note that I have some interfaces with wrappers around common objects for the implementations (IHttpResponse, IStream) so that I can unit test these methods. Replacing IStream with a System.IO.MemoryStream instance and IHttpResponse with whatever HttpResponse object is part of your web environment (depending on whether you are using WebForms, MVC, etc.) should make this solution usable without my implementations. One final note: my IStream has a method called GetUnderlyingSource which returns a System.IO.Stream object, so if you replace my code with an actual Stream implementation, the call can be simplified to just zipFile.Save(stream).
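
For anyone without those wrapper interfaces, here is a minimal sketch of the zip-building portion using a plain System.IO.MemoryStream (the method name is just illustrative):

using System.Collections.Generic;
using System.IO;
using Ionic.Zip;

public static MemoryStream CreateZipStream(IEnumerable<string> filePaths)
{
	// The caller is responsible for disposing the returned stream
	var stream = new MemoryStream();
	using (var zipFile = new ZipFile())
	{
		foreach (var filePath in filePaths)
		{
			// Empty string places each file at the root of the archive
			zipFile.AddFile(filePath, "");
		}
		zipFile.Save(stream);
	}
	stream.Position = 0; // Rewind so the consumer can read from the beginning
	return stream;
}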

My favorite part is that the code for creating the zip file is incredibly minimal – there are a total of four lines here that are dedicated to the library, and the rest is all logic to support it and return the result to the user in my application. The one “configurable” part here is the second parameter to the AddFile method. I have passed an empty string here because I want all of the files to be placed at the root of the zip file. When I used the variant of this method with a single parameter, the zip file would contain a folder structure that mirrored the actual file structure where the files were located. Here is the definition of the AddFile method that pops up from IntelliSense:

fileName (string): The name of the file to add. The name of the file may be a relative path or a fully-qualified path.

directoryPathInArchive (string): Specifies a directory path to use to override any path in the fileName. This path may, or may not, correspond to a real directory in the current filesystem. If the files within the zip are later extracted, this is the path used for the extracted file. Passing null (Nothing in VB) will use the path on the fileName, if any. Passing the empty string (“”) will insert the item at the root path within the archive.