Importing SQL Azure database to SQL Express

This one frustrated me, but it seems pretty common to hit issues when bringing a database down from SQL Azure to the on-premise version.

If you follow the normal route of exporting the database, downloading the bacpac, and then importing it, you might hit this error:

TITLE: Microsoft SQL Server Management Studio
------------------------------
Could not import package. Warning SQL72012: The object [data] exists in the
target, but it will not be dropped even though you selected the 'Generate drop statements
for objects that are in the target database but that are not in the source' check box.
Warning SQL72012: The object [log] exists in the target, but it will not be
dropped even though you selected the 'Generate drop statements for objects that are in the
target database but that are not in the source' check box. Error SQL72014: .Net SqlClient
Data Provider: Msg 33161, Level 15, State 1, Line 1 Database master keys without password
are not supported in this version of SQL Server. Error SQL72045: Script execution error.
The executed script: CREATE MASTER KEY; (Microsoft.SqlServer.Dac)
------------------------------
BUTTONS: OK
------------------------------

The cause? In this case it's not because you are running an old version of SQL Server (mine was 2016 SP1), but because SQL Azure supports a master key with no encryption password specified. To resolve this, run a piece of T-SQL against your SQL Azure database to set the master key password BEFORE you export the database:

ALTER MASTER KEY ADD ENCRYPTION BY PASSWORD = '<password>';
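
If you prefer to script it, the same statement can be run from PowerShell - a minimal sketch, assuming the SqlServer module's Invoke-Sqlcmd and placeholder server, database and credential values:

# Sketch: set the master key password on the SQL Azure database before exporting.
# Assumes Install-Module SqlServer; the server, database and credentials below
# are placeholders - substitute your own.
Invoke-Sqlcmd -ServerInstance 'yourserver.database.windows.net' `
    -Database 'YourDatabase' `
    -Username 'youradmin' -Password 'yourpassword' `
    -Query "ALTER MASTER KEY ADD ENCRYPTION BY PASSWORD = '<password>';"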

After that, export as normal and you should be able to import your database.

Getting assigned licenses in Azure AD / Office 365

This week I found myself with a large number (1000+) of users in an Azure Active Directory setup linked to Office 365, but where only a subset of these accounts actually had any Office 365 offerings assigned. To make matters worse, it was a mixture of products being assigned, and the need was to standardise things and move to using Security Groups to assign the licenses instead of manual assignment.

Obviously the first thing to do was to get a feel for the current assignments, and PowerShell certainly came in handy here - it removed the need to check each user by hand.

I used the newer AzureAD PowerShell module (Install-Module AzureAD) and knocked the following script out very quickly to dump the details to screen -- this allowed me to easily work out the group memberships needed and then run through removing the manually (directly) assigned licenses.

$VerbosePreference = 'Continue'

Import-Module AzureAD -Force
Connect-AzureAD

# Only look at SKUs that are actually in use
$subscribedProducts = Get-AzureADSubscribedSku | Where-Object { $_.ConsumedUnits -ge 1 }

Write-Verbose "License types are:"
$licenses = $subscribedProducts | Select-Object -ExpandProperty ServicePlans | Format-Table -AutoSize | Out-String
Write-Verbose $licenses

$users = Get-AzureADUser -All $true
Write-Verbose ("There are " + $users.Count + " users in total")

foreach ($license in $subscribedProducts)
{
    Write-Output ("Looking at SKUID " + $license.SkuId + ", " + $license.SkuPartNumber)

    # List every user with this SKU assigned
    foreach ($user in $users)
    {
        if ($user.AssignedLicenses.SkuId -eq $license.SkuId)
        {
            Write-Output ("User has this assigned - " + $user.UserPrincipalName)
        }
    }

    # Then break the SKU down into its service plans and list those assignments too
    foreach ($servicePlan in $license.ServicePlans)
    {
        Write-Output ("Service Plan: " + $servicePlan.ServicePlanId + ", " + $servicePlan.ServicePlanName)
        foreach ($user in $users)
        {
            if ($user.AssignedPlans.ServicePlanId -eq $servicePlan.ServicePlanId)
            {
                Write-Output ("User has this assigned - " + $user.UserPrincipalName)
            }
        }
    }
}
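
Once the group-based assignments are in place, removing a direct assignment comes down to Set-AzureADUserLicense with the SKU in the RemoveLicenses list. A minimal sketch - $user and $license are as in the loops above, and you would only run this once the group assignment covers the user:

# Sketch only: strip a directly assigned SKU from a user. Run this only after
# the group-based assignment is in place, or the user loses the product.
$removal = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$removal.RemoveLicenses = @($license.SkuId)
Set-AzureADUserLicense -ObjectId $user.ObjectId -AssignedLicenses $removal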


Release Management: The art of build promotion - Part 1

For a while now the software development industry has been pushing a release practice built around build promotion - the art of completing a build once and then promoting those artifacts through different environments or stages as they progress towards the release point, changing only environmental configuration along the way. This has the excellent objective of confirming that what you release is what you tested. Of course, that's not entirely the case if you have employed any feature toggling and the toggles are not matched up across the environments, but that's a different story!

I have been working with this practice for a while now, but there is always the odd situation that comes up from development teams about the best way to handle certain scenarios, such as support or emergency releases. Before I get onto those, let's have a run through the general theory.

The whole premise of Release Management stems from the desire to automate as much of the development pipeline as possible; partly to reduce risk, but more importantly to increase the speed at which changes can be applied.

So you usually start off with Continuous Integration - the practice of compiling your code often, say on each check-in to your version control. This confirms that the code you have committed can at least be integrated with the rest.

After that you add your Unit Tests (and, if you are lucky enough to have the automation, Smoke or Integration Tests) and you get Continuous Delivery. You can, in theory, be confident enough to take any PASSING build and send it to a customer. I say in theory, as this tends not to be the practice!

Finally you get Continuous Deployment. Some view this as the holy grail of release practices, as in essence you are deploying constantly. As soon as a build is passing and has been tested, you lob it out the door. Customers and users get to see new features really quickly, and developers get feedback quickly - in this practice you really only fix forwards, as you don't need to do masses of manual regression testing, so it's just as quick.

Build Promotion techniques appear in the last two of these - they can be used when you are able to do Continuous Delivery (you can select any build and promote it through the stages), but they can also apply to Continuous Deployment, where you might allow business stakeholders to select when and what builds are deployed, as long as you are confident from a technical perspective that they will work. At worst, you use the technique (and tooling) to give you a mechanism to get business stakeholder approval before allowing a release to go to production - something that is extremely important in regulated companies. In these cases Build Promotion is an auditor's dream, as you should be able to clearly identify what was deployed to production environments, when, and exactly what was changed.

Tooling such as VSTS / TFS makes Release Management and Build Promotion easy to get into these days - and now, with the web based versions, it's actually usable. However, it really is not a holy grail. There are some things you need to consider.

Let's assume you have deployed Release Management Build Promotion techniques across your entire process - you will end up with a series of stages, or environments, that will look something like this:

Dev -> Test -> UAT -> Production

A build drops in at the first stage, Dev, after it is compiled (or is selected manually if you are starting your process - or pipeline - by hand). From there it goes through each stage sequentially, optionally requiring further (manual) interaction or approval.
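
Conceptually, each promotion is just the same artifact moving on with a new set of configuration layered over it. A back-of-a-napkin sketch (paths and stage names here are made up; real tooling such as VSTS / TFS handles this for you):

# Illustration only - the same build output moves through every stage untouched;
# only the per-environment configuration changes. All paths are hypothetical.
$artifact = '\\builds\MyApp\1.0.42'          # built once, never rebuilt
$stages   = 'Dev', 'Test', 'UAT', 'Production'

foreach ($stage in $stages) {
    # Copy the unchanged build output into the stage...
    Copy-Item -Path "$artifact\*" -Destination "\\deploy\$stage\MyApp" -Recurse -Force

    # ...then layer on that stage's configuration (connection strings, endpoints, etc.)
    Copy-Item -Path "\\config\MyApp\$stage\app.config" -Destination "\\deploy\$stage\MyApp" -Force

    Read-Host "Deployed to $stage - press Enter to promote to the next stage"
}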

Getting version one out the door on this process is easy enough. But what happens if you find a bug in version one just as you have version two sitting at UAT getting its final rubber stamp? What would you do? Scrub version two, make the fix in that code base and restart the whole process again? Or do you scrub version two, make the fix on the version ONE code base and start a full sequence of deployments for THAT version?

Hmm.

And about now you get the realisation that you have approached Release Management and Build Promotion techniques the wrong way. Instead of creating a process that can be quick and agile, you have instead created something that is about as agile as a steel girder.

[To be continued!]

Chocolatey and keeping your local feed up to date

For a while now I've been using a combination of Boxstarter and Chocolatey to help me manage and maintain my devices. But one of the snags I have encountered is keeping my local, moderated feed up to date with the public feed.

You are probably wondering ... why do I bother with a private package feed? Well, two reasons:

- Offline support; I often end up having to rebuild devices while away from home - and hotels don't have great wifi. I travel with a pre-loaded USB key with a copy of my feed (and installers) for just this reason

- Control; I like to keep control of what versions are on my devices and don't particularly like being forced onto the latest and greatest - and I like to ensure all my devices are on the same version ;)

So I wrote a very simple tool to compare my local package feed to the chocolatey public feed; you can find it on GitHub: https://github.com/aneillans/ChocoCompare

And here it is in action:

D:\Dev\git\ChocoCompare\ChocoCheckUpdates\bin\Debug>chococompare
No settings found, please specify your repository locations
Chocolatey Repository [https://chocolatey.org/api/v2/] (Please enter to keep):
Local Repository [] (Please enter to keep): D:\Choco\Packages
Checking package 7-Zip (Install); local version is 16.02.0.20160811; remote version is 16.02.0.20160811
Checking package Beyond Compare; local version is 4.1.6.21095; remote version is 4.1.8.21575
Update available for Beyond Compare to 4.1.8.21575
Checking package Boxstarter; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter Bootstrapper Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter Chocolatey Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter Common Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter HyperV Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter WinConfig Module; local version is 2.8.29; remote version is 2.8.29
Checking package Chocolatey; local version is 0.10.0; remote version is 0.10.1
Update available for Chocolatey to 0.10.1
Checking package ChocolateyGUI; local version is 0.13.2; remote version is 0.13.2
Checking package dashlane; local version is 4.1.1.10306; remote version is 4.1.1.10306
Checking package Fiddler; local version is 4.6.2.29442; remote version is 4.6.2.29442
Checking package Google Chrome; local version is 52.0.2743.116; remote version is 53.0.2785.116
Update available for Google Chrome to 53.0.2785.116
Checking package Windows Management Framework and PowerShell; local version is 5.0.10586.20151218; remote version is 5.0.10586.20151218
Checking package RSAT 1.0.5; local version is 1.0.5; remote version is 1.0.5
Checking package SQL Server Management Studio; local version is 13.0.15700.28; remote version is 13.0.15800.18
Update available for SQL Server Management Studio to 13.0.15800.18
Checking package Sublime Text 3; local version is 3.0.0.3114; remote version is 3.0.0.3114
Checking package Sysinternals; local version is 2016.07.29; remote version is 2016.08.29
Update available for Sysinternals to 2016.08.29
Checking package Visual Studio 2015 Enterprise Update 3; local version is 2015.03.01; remote version is 2015.03.02
Update available for Visual Studio 2015 Enterprise Update 3 to 2015.03.02
Finished checking packages; there are 6 packages to update.
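
Under the hood the comparison is nothing clever - for each local package, ask the remote feed for its latest version and compare. A rough sketch of the idea, assuming the NuGet v2 OData endpoint that the Chocolatey feed exposes (error handling and pre-release versions ignored):

# Rough sketch: compare a known local version of a package against the latest
# on the public feed. Package id and local version are hard-coded examples.
$feed = 'https://chocolatey.org/api/v2'
$packageId = 'chocolatey'
$localVersion = [version]'0.10.0'

# Ask the feed for the latest version of the package (the Atom entry comes back as XML)
$entry = Invoke-RestMethod "$feed/Packages()?`$filter=Id eq '$packageId' and IsAbsoluteLatestVersion"
$remoteVersion = [version]$entry.properties.Version   # note: won't cope with pre-release tags

if ($remoteVersion -gt $localVersion) {
    Write-Output "Update available for $packageId to $remoteVersion"
} else {
    Write-Output "$packageId is up to date ($localVersion)"
}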

Building a .NET App and need low cost cloud logging?

One of the most frustrating things about being an app developer is getting log files from the applications you've built once they are deployed (although, don't forget, if you are deploying to end users you MUST get their permission!).

Here's an easy, and cost effective, way to get around this problem.

Simply use NLog in your app, then check out this NLog to Azure Tables extension.

Perfect!

Introduction to Credential Federation

There are many misconceptions and misunderstandings around how credential federation works. This post outlines the (general) practice of federation and details the transaction flows, in an attempt to clarify the technical piece.

For a web application using a traditional authentication approach, you go to the web page and are prompted by the application to log in. Your details are then checked against details they hold (hopefully securely!). However, this means you are likely to have a different set of credentials to remember for every service you use online. For an enterprise, this poses a challenge not only for the users, but also for the administrators when it comes to removing user access when people move on to new pastures.

Federation addresses this by integrating online services with the on-premise Active Directory (or other) identity platform.

The moving parts

Federation Identity Server: This is the server that you, as the enterprise, will deploy on your network and validate the credentials with. This server will also serve up the login prompt to your users so you can usually brand this as needed. Usually only accessible over HTTPS (for obvious reasons I’m sure).

Relying Party: This is the third party that is going to consume the claims that your Federation Identity Server generates. Upon registering, you exchange key material with this party so that it can validate the signature on your claims, allowing it to be 100% sure that you sent them. You need to take great care of your signing key, otherwise you might as well hand over your authentication database to whomever has it (they can pretend to be anyone on this party, you see).

Claims: A set of attributes that denote things like name, email address, role, etc. – pretty flexible and normally configured as needed by the relying party and generally are attributes from your authentication system. These are cryptographically signed to verify the originator. You’ll probably hear the term SAML around this particular area.
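
To make that concrete, here is the kind of attribute set a federation server might emit - purely illustrative, shown as a PowerShell hashtable (the real thing is a signed SAML/XML token; these standard claim type URIs are just common examples):

# Illustrative only: claims are name/value attributes keyed by a claim type URI.
$claims = @{
    'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name'         = 'Jane Bloggs'
    'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress' = 'jane.bloggs@contoso.com'
    'http://schemas.microsoft.com/ws/2008/06/identity/claims/role'       = 'Finance Users'
}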

So how does all this work?

The easiest way to explain it is to describe a user’s journey logging on via a federated approach.

1. User visits the federated identity login page for the given cloud application
Sometimes this is the same login they would normally go to, and the application will “detect” they are federated when they enter their username.
2. Web app redirects them to your federated identity server’s login page
3. User logs in
4. Federated identity server validates the identity, and generates claims.
These are often embedded in a response page after login, as a hidden form which is then submitted back to the relying party application
5. User is redirected back to the relying party application where the claims are processed
6. User receives a relying party authentication token as if they had logged in locally
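
For a feel of what step 2 looks like on the wire with WS-Federation (the protocol ADFS speaks), here is a sketch of the redirect the relying party issues - the hostnames are made up:

# Sketch of a WS-Federation sign-in redirect (step 2). Hostnames are hypothetical.
$federationServer  = 'https://sts.contoso.com/adfs/ls/'   # your federation identity server
$relyingPartyRealm = 'https://app.fabrikam.com/'          # the relying party's realm identifier

$redirect = $federationServer + '?' + (@(
    'wa=wsignin1.0'                                            # a sign-in request
    'wtrealm=' + [uri]::EscapeDataString($relyingPartyRealm)   # who the claims are for
    'wctx=rm=0'                                                # opaque state handed back in step 5
) -join '&')

Write-Output $redirect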

Common myths

Myth: Credentials are transferred to the relying party.
Truth: In most federation setups, claims are sent to the relying party cryptographically signed with a key that the relying party can validate. This allows it to be confident that only the federated identity server generated the claims it has received, and therefore that it can trust them.

Myth: The federation identity server is safe from attack and is not exposed.
Truth: In order to be useful - i.e. contactable - the federation identity server has to be internet accessible, unless you restrict your users to logging in with federation from specific locations, which pretty much renders it useless.

Surface Pro 4 - Impressions

For the past few weeks I've been using a Surface Pro 4 -- something I picked up just before I headed to San Francisco for Microsoft Build 2016.

Why did I opt for the Surface Pro 4 over the Surface Book? I already have a very functional laptop (a MacBook Pro, actually), and I wanted a tablet that was a genuinely functional device (and that I could handwrite on!). The Surface Pro gave me more bang for buck in this regard.

It's even powerful enough to let me play Prison Architect ;) (And, of course, run a small dev lab in Hyper-V.)