In an earlier post, I wrote about how to set up auto-forwarding for a user’s email while they are on an extended leave from the office. An interesting problem arose once that employee returned: their mailbox was still showing up in my list of mailboxes in Outlook 2016, even after the forward was disabled and delegate access was confirmed as removed (or never set up in the first place).
Resolving the issue involves a little bit of PowerShell, and while the script code isn’t terribly difficult, I did have to piece it together from a couple of different sources and deal with the fact that I had MFA (multi-factor authentication) enabled on my account.
In the end, I briefly disabled MFA on my account while I executed the script below. I was unable to connect to Exchange Online via PowerShell with MFA enabled, and I gave up because I didn’t think it was worth my time to troubleshoot something fairly insignificant for my use case.
Below is the script I used:
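In outline, the script connects to Exchange Online via remote PowerShell and then re-grants and removes full access with automapping disabled, which clears the mailbox out of the admin’s Outlook profile. The mailbox and admin addresses below are placeholders, and this is a sketch of the approach rather than a verbatim copy:

```powershell
# Connect to Exchange Online via remote PowerShell (basic auth, MFA disabled)
$Cred = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri https://outlook.office365.com/powershell-liveid/ `
    -Credential $Cred -Authentication Basic -AllowRedirection
Import-PSSession $Session

# Re-grant full access with automapping turned off, then remove the
# permission entirely; addresses are placeholders
Add-MailboxPermission -Identity "returning.user@example.com" `
    -User "admin@example.com" -AccessRights FullAccess -AutoMapping $false
Remove-MailboxPermission -Identity "returning.user@example.com" `
    -User "admin@example.com" -AccessRights FullAccess

Remove-PSSession $Session
```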
Every once in a while I come across a problem where I open an Excel document and am notified of external references, even though I’m certain the document doesn’t or shouldn’t contain any external references. A common message is:
This workbook contains links to one or more external sources that could be unsafe
In order to find these references in Excel 2016, click the “Data” tab at the top. Then, under the “Queries and Connections” section, choose “Edit Links.” From there, a dialog will pop up showing any links and allowing you to check the status of the links. If the links are truly broken, checking the status should confirm that.
To break the connection, you can simply choose “break” with the appropriate connection selected. Any cell whose value depended on the connection will be converted to its current value, so you shouldn’t lose data by breaking the connection – it will simply no longer update along with the connected data source.
Every once in a while I get the urge to clean up emails and get rid of a bunch of stuff I just don’t need. On one hand, it’s nice to be able to reference old emails, but on the other hand, you risk exposing your personal information to potential evil-doers (hackers, or data mining/advertisers if you’re using a freemail account like Gmail). Do you really need those emails from 8 years ago? Probably not.
Anyway, regardless of the motivation, here are some useful filters that I use:
The Gmail filters listed below are all performed by typing the unbolded text from the bullets into your search box. You can combine them in any way you like – just put a space between each filter.
- All unread email: label:unread
- All emails sent or received prior to a specific date: before:2018/1/1
- All emails without a label: -has:userlabels
- All emails not in the inbox, sent, drafts, or chat folders: -in:inbox -in:drafts -in:sent -in:chat
- All emails with attachments: has:attachment
- All emails at least as large as a certain size (in bytes – example is about 5MB): size:5242880
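As an example of combining the filters above, this single search finds unlabeled messages from before 2018 that carry attachments of roughly 5MB or more (the date and size are arbitrary):

```
before:2018/1/1 has:attachment size:5242880 -has:userlabels
```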
Outlook is a little bit different, as it offers some pretty powerful features such as search folders, but filtering can still be performed using the search box for most items.
- All unread email (multiple options):
  - Use the filter dropdown:
    - Choose the folder you want
    - On the right side of the mail pane, there is a dropdown that defaults to “All.” Select “Unread” from the dropdown to see only unread messages
  - Create a search folder:
    - On the left pane (of the default view) that shows your email folders, scroll down to the bottom of the Data File/Account where you want to view unread messages
    - Look for a folder called “Search Folders”
    - Right-click on “Search Folders” and choose “New Search Folder”
    - Select “Unread Mail” from the list in the popup window that opens and hit OK
    - (Optional) Drag the new “Unread Mail” search folder into your favorites
- All emails received prior to a specific date: received:<2018/1/1
- All emails sent prior to a specific date: sent:<2018/1/1
- All emails with attachments: hasattachment:yes
- All emails at least as large as a certain size (example is 5MB): messagesize:>=5MB
Converting SQL data types can be a bit finicky, and, at least for this guy, converting a stored large integer value to a string is not intuitive at all.
I mostly run into this when I import values from some data source like an Excel sheet that stores values like tracking numbers as a float. From there, I usually write a cursor to update tables in my system with these values, and when those tables use a column type of varchar or nvarchar, you have to convert from float to varchar.
One would think that using CONVERT(varchar(50), TrackingNumber) would do the trick, but when this conversion is made, the resulting string is in scientific notation.
The real trick is to first convert the float value to a bigint and THEN convert it to a varchar, as shown below:
CONVERT(varchar(50), CONVERT(bigint, TrackingNumber))
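A quick illustration of the difference, using a hypothetical tracking number stored as a float:

```sql
DECLARE @TrackingNumber float = 420000000000;

-- Converting the float directly yields scientific notation (e.g. '4.2e+011')
SELECT CONVERT(varchar(50), @TrackingNumber);

-- Converting to bigint first preserves the full number ('420000000000')
SELECT CONVERT(varchar(50), CONVERT(bigint, @TrackingNumber));
```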
Earlier this year, I built a .NET Core web application and deployed it on IIS 7.5. I noticed right away that it was extremely slow on initial load. This was confusing because I had tested deployment to Azure and the performance was great, and all of my .NET Framework sites ran quickly on the same server.
I went a long time without an answer, but while reading through documentation during the setup of another application, I came across this bit of information:
You can make use of the preloading feature to have applications running before users connect. In your ApplicationHost.config, add the preloadEnabled attribute to the <application> element associated with the application. The application node is a child element of the sites node:
<site name="Default Web Site" id="1">
    <application path="/rssbus" applicationPool="DefaultAppPool" preloadEnabled="true">
    </application>
</site>
When PreloadEnabled is set to true, IIS will simulate a user request to the default page of the website or virtual directory so that the application initializes.
While it’s technically a bit of a workaround – preloading doesn’t address the root cause of the slow initial load – it has kept my application loading quickly since I enabled it.
TL;DR – the fastest way to enable TLS 1.2 on IIS 7.5:
- Download IIS Crypto at https://www.nartac.com/Products/IISCrypto/
- Run the executable on your server
- On the user interface, click the “Best Practices” button (located at bottom left)
- Click “Apply” (located at bottom right)
- Reboot Server
The full details:
Today I was contacted by a third-party company that exchanges data with mine and they informed me that they were requiring TLS 1.2 connections as of the new year. Reviewing information about my server’s crypto configuration, I found that, indeed, TLS 1.1 and TLS 1.2 were not enabled.
In setting out to resolve the problem, I ran across a couple of posts that talked about updating registry keys and doing some other messy stuff. And then, I found this post on ServerFault about an awesome tool called IIS Crypto.
From the IIS Crypto website:
IIS Crypto is a free tool that gives administrators the ability to enable or disable protocols, ciphers, hashes and key exchange algorithms on Windows Server 2008, 2012 and 2016. It also lets you reorder SSL/TLS cipher suites offered by IIS, implement best practices with a single click, create custom templates and test your website
Not only is the tool free, it doesn’t even install anything on your machine.
After downloading and running, I looked over the list of available protocols, ciphers, etc. They provide a “Best Practices” button which enables only the protocols, ciphers, etc. that should be enabled using, well, current best practices. This is another awesome feature because the list of everything to review is fairly extensive and not having to do the research myself on these is a huge time saver.
On the program’s menu is a “Site Scanner” tool that will open a browser and analyze your site. You can also use it without running the application – the URL is:
https://www.ssllabs.com/ssltest/analyze.html?d=<your site>&hideResults=on (where <your site> is the website you want to analyze)
The analyzer checks your certificate(s), available protocols, and cipher suites, performs handshake simulations with a bevy of operating system / user-agent combinations (well over 50), and analyzes against various attacks. When I first ran the test, the results weren’t so great – there were a number of problems related to my crypto settings.
After reviewing the analyzer, I applied the “Best Practices” settings and restarted the server. Once the server booted back up everything was working and I passed the scanner with flying colors.
For reference, I was working with IIS 7.5 running on Windows Server 2008 R2.
I love Azure. It’s a great platform and I’m very happy with the continuing evolution of products and services offered. If you ever have to move resources to a different subscription, there are a lot of little things you have to think about, because sometimes settings are tied to a particular subscription or resource group (which is tied to a subscription).
Some of the non-profit organizations I’ve built applications for have taken advantage of Microsoft’s donation offerings, where they receive Microsoft products and services at a heavily discounted rate. However, these subscriptions often come with a time limit, after which they must be purchased again. When that happens, a new Azure subscription is created and you have to reassign any resources that are under the old subscription to the new one.
The easy part is actually reassigning the resources. There are two ways I see to do this:
- Create a new resource group under the new subscription, then assign all of the resources you would like to that group
- Move the existing resource group to the new subscription. This works better in cases where your resource groups are well-defined
- Moving a resource group consists of choosing the resource group in the Azure portal and clicking “Move”
The trickier part is figuring out any resources that may have been tied to the old resource group name or subscription. Here are a couple I have found:
- SendGrid (and likely other external/third party applications that can’t use Azure credits) cannot be migrated from one subscription to another. A new API key must be generated for the application(s) of use.
- Let’s Encrypt certificates generated using the extension http://www.siteextensions.net/packages/letsencrypt (detailed in the post http://gagetrader.info/2016/09/27/lets-encrypt-azure-win/) have a couple of keys that are tied to the subscription ID and the resource group. To view the keys, select the resource where the web job was registered in the Azure portal -> Application Settings -> Keys section. The ones that need to be edited are:
- letsencrypt:SubscriptionId – 7dbf7306-25b3-4e5a-a85a-44017efb9cc5
- letsencrypt:ResourceGroupName: (New Resource Group Name, if applicable)
After you have completed this step, you will find that the web job fails with the following message the next time it runs:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Functions.RenewCertificate ---> Microsoft.Rest.Azure.CloudException: The client 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' with object id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' does not have authorization to perform action 'Microsoft.Web/sites/config/list/action' over scope '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MyResourceGroup/providers/Microsoft.Web/sites/MySite/config/publishingcredentials'. at Microsoft.Azure.Management.WebSites.WebAppsOperations.
This long message basically means that the Let’s Encrypt service principal created during configuration of the extension needs to be assigned the Contributor role. This is fairly straightforward:
- Make sure Azure PowerShell is installed on your machine (open the Microsoft Web Platform Installer and find Microsoft Azure PowerShell in the list if you don’t have it)
- Open PowerShell as an administrator and sign in using the command: Login-AzureRmAccount
- Make sure your new Azure subscription ID is selected. If not, run the following command:
Select-AzureRmSubscription -SubscriptionId Your-Subscription-Id-Guid-Here
- Run the following command to assign the Contributor role to the service principal under your new subscription:
New-AzureRmRoleAssignment -RoleDefinitionName Contributor -ServicePrincipalName Your-Service-Principal-Name-From-Extension-Setup
Once that has been completed, the job should run again and be successful. Now your SSL certificates will continue to auto-renew.
I recently had a member of my organization go on maternity leave, and, because of the way that babies work, she wasn’t able to set an out of office response or a forward.
Thankfully, with Exchange Admin Center, it’s pretty easy to do both.
Setting a User’s Out of Office Response
The first thing is to essentially impersonate a user as the Office Admin:
- Navigate to Exchange Admin Center on office.com (you have to be an Administrator to do this, obviously)
- Click your Name/Icon in the very upper right-hand corner and choose “Another user…”
- Choose the user in your organization that you need to update from the popup window that follows
- On the right sidebar of the window that opens for that user, you should see “shortcuts to other things you can do”
- In that list is “Set up an automatic reply message”
That’s it. Once you’ve chosen “Set up an automatic reply message,” you can format the message however you want (one for internal users and one for external users).
Forwarding a User’s emails
- Log into Office 365 Admin Center
- Choose Users -> Active Users
- Find the User whose email you want to forward and click their name
- In the sidebar that opens, expand “Mail Settings” and click “Edit” next to Email Forwarding
- Flip forwarding to On and enter the email address the messages should be forwarded to
- You can optionally choose to keep a copy of the email in the inbox for the original user
Pretty straightforward, but not something I do all that frequently, so I know I’ll forget by the next time I have to do it 🙂
I recently set up a computer for a user who likes to store a lot of files on their desktop. That’s risky if those files aren’t being backed up. I’m a big proponent of using OneDrive, especially since my organization uses Office 365, which includes 1TB of storage with each user license you purchase.
I suggested the user move their desktop files to OneDrive to give them a backup and access from other places, and they mentioned that they are more comfortable with the desktop because they’re used to where certain folders and files are organized visually.
It turns out that you can map any folder to the desktop, and it’s easy:
- Open Windows Explorer (Win + E)
- Right-click “Desktop” and choose “Properties”
- Click the “Location” tab
- Type the location of the directory you want to be the “Desktop” (or click “Move” and browse to the folder)
- Click OK
For the user I mentioned above, I created a folder called “Desktop” in their OneDrive for Business folder and mapped the Desktop to that location. Now there are backups and they can use the desktop as they always have.
H/T to this post for the knowledge: Can you change the location of the Desktop folder in Windows?
One issue I’ve come across lately while working with Entity Framework Migrations has to do with foreign key relationships. If you’ve ever done any reconfiguration of your schema, you know you probably need to update your migration files to get all the data loaded correctly. Let’s take a simple example:
Let’s say you have an Order model for all of your orders defined as such:
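A minimal sketch of such a model (the property names here are illustrative):

```csharp
public class Order
{
    public int OrderID { get; set; }
    public DateTime OrderDate { get; set; }
    public decimal Total { get; set; }
}
```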
This is obviously a very contrived example and a real order would have a lot of other information, but it works for this example.
Now let’s say you add a new ordering customer and you want to distinguish orders by Customer (probably a good thing to do!). Here is the Customer object:
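Again a minimal sketch, with a navigation collection back to the orders:

```csharp
public class Customer
{
    public int CustomerID { get; set; }
    public string Name { get; set; }

    // Navigation property: one customer has many orders
    public virtual ICollection<Order> Orders { get; set; }
}
```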
Now we need to modify our order by adding a CustomerID column. The result looks as you’d expect:
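Continuing the sketch, the Order model picks up the foreign key column and a navigation property:

```csharp
public class Order
{
    public int OrderID { get; set; }
    public DateTime OrderDate { get; set; }
    public decimal Total { get; set; }

    // New non-nullable foreign key column plus navigation property
    public int CustomerID { get; set; }
    public virtual Customer Customer { get; set; }
}
```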
You’ll need to properly set up your mapping wherever you have that defined. There are multiple ways to do this, but the approach I prefer is to have mapping files defined separately from my POCOs (plain old C# objects – the classes defined above) and then add them in the OnModelCreating function of my Context class like this:
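Assuming EF6 with EntityTypeConfiguration-style map classes (CustomerMap here is illustrative), the registration looks something like:

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Register each mapping class kept separate from the POCOs
    modelBuilder.Configurations.Add(new OrderMap());
    modelBuilder.Configurations.Add(new CustomerMap());
}
```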
Inside the constructor of my OrderMap class, this line of code will add the relationship between Customer and Order:
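With the EntityTypeConfiguration approach, that relationship line is along these lines:

```csharp
// Inside OrderMap : EntityTypeConfiguration<Order>
// Every Order requires a Customer; a Customer has many Orders
this.HasRequired(o => o.Customer)
    .WithMany(c => c.Orders)
    .HasForeignKey(o => o.CustomerID);
```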
Now, with all of that set up, if you add a migration, Entity Framework will scaffold the changes required to make all this happen.
You’ll probably want to modify a couple of things, though.
For starters, you’ll want to set up your customers and set all of your existing orders to use the CustomerID associated with the orders that already exist in the system. Also, you have to do this before adding the foreign key between Order and Customer, because after you add the new column (it is non-nullable), all CustomerID fields in your Orders table will be “0”.
Your migration might look something like this (EF will also add some indexes, which I’ve omitted for brevity):
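A pared-down Up() method for this contrived example might be:

```csharp
public override void Up()
{
    CreateTable(
        "dbo.Customers",
        c => new
        {
            CustomerID = c.Int(nullable: false, identity: true),
            Name = c.String(),
        })
        .PrimaryKey(t => t.CustomerID);

    // Non-nullable, so every existing Order row starts with CustomerID = 0
    AddColumn("dbo.Orders", "CustomerID", c => c.Int(nullable: false));

    AddForeignKey("dbo.Orders", "CustomerID", "dbo.Customers", "CustomerID");
}
```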
You’ll need to add code to update those customers. I usually write a line of Sql like the following, and place it after the create table but before the AddForeignKey call:
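For this example, that could be as simple as the following (the customer row and the hardcoded ID are hypothetical):

```csharp
// Placed after CreateTable/AddColumn but before AddForeignKey
Sql("INSERT INTO dbo.Customers (Name) VALUES ('Existing Customer')");
Sql("UPDATE dbo.Orders SET CustomerID = 1"); // assumes the new customer received identity 1
```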
This will then get everything in your database ready to add that foreign key constraint. Of course, if you don’t like hardcoding company information into your migrations (not a great practice, really), you can do this after the fact, but sometimes you already have all the data in your database and just need to move it around due to a schema change. This is, again, a bit of a contrived example.
Now, to the tricky part I really want to highlight: you have to be really consistent in the way you add foreign keys.
For example, these two lines are slightly different – the first one uses the schema name in both the dependent and principal tables, while the second only does so for the dependent table:
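For illustration, using the tables from the example above:

```csharp
// Schema name on both the dependent and principal tables
AddForeignKey("dbo.Orders", "CustomerID", "dbo.Customers", "CustomerID");

// Schema name on the dependent table only
AddForeignKey("dbo.Orders", "CustomerID", "Customers", "CustomerID");
```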
The foreign key names they generate are as follows:
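Assuming EF’s default convention of FK_&lt;dependentTable&gt;_&lt;principalTable&gt;_&lt;column&gt;, built from the exact strings passed in, the first and second variants produce:

```
FK_dbo.Orders_dbo.Customers_CustomerID
FK_dbo.Orders_Customers_CustomerID
```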
If you later try to drop a foreign key and don’t use the same exact format as you did when setting it up, you will encounter errors – usually something like:
The object ‘FK_dbo.Orders_dbo.Customers_CustomerID’ is dependent on column ‘CustomerID’.
ALTER TABLE DROP COLUMN CustomerID failed because one or more objects access this column.
So the moral here is to be very consistent with your foreign key naming scheme. If you’ve got an old database that you’ve added code first to after the fact, you’ll probably have a lot of relationships that don’t use the schema name in the key name, so you’ll run into this frequently if you’re modifying your schema.