Getting assigned licenses in Azure AD / Office 365

This week I found myself working with an Azure Active Directory setup linked to Office 365 that contained a large number (1000+) of users, but where only a subset of these accounts actually had any Office 365 offerings assigned. To make matters worse, a mixture of products had been assigned, and the need was to standardise things and move to using Security Groups to assign the licenses instead of manual assignment.

Obviously the first thing to do was to get a feel for the current assignments, and PowerShell certainly came in handy here - it removed the need to check each user by hand.

I used the newer AzureAD PowerShell module (Install-Module AzureAD) and quickly knocked out the following script to dump the details to screen -- this allowed me to easily work out the group memberships needed and then run through removing the manually (directly) assigned licenses.

$VerbosePreference = 'Continue'

Import-Module AzureAD -Force
Connect-AzureAD

# Only interested in SKUs that actually have consumed units
$subscribedProducts = Get-AzureADSubscribedSku | Where-Object { $_.ConsumedUnits -ge 1 }

Write-Verbose "License types are:"
$licenses = $subscribedProducts | Select-Object -ExpandProperty ServicePlans | Format-Table -AutoSize | Out-String
Write-Verbose $licenses

$users = Get-AzureADUser -All $true
Write-Verbose ("There are " + $users.Count + " users in total")

foreach ($license in $subscribedProducts)
{
    Write-Output ("Looking at SKUID " + $license.SkuId + ", " + $license.SkuPartNumber)

    # Users with this SKU assigned on their account
    foreach ($user in $users)
    {
        if ($user.AssignedLicenses.SkuId -contains $license.SkuId)
        {
            Write-Output ("User has this assigned - " + $user.UserPrincipalName)
        }
    }

    # Break the SKU down into its individual service plans
    foreach ($servicePlan in $license.ServicePlans)
    {
        Write-Output ("Service Plan: " + $servicePlan.ServicePlanId + ", " + $servicePlan.ServicePlanName)
        foreach ($user in $users)
        {
            if ($user.AssignedPlans.ServicePlanId -contains $servicePlan.ServicePlanId)
            {
                Write-Output ("User has this assigned - " + $user.UserPrincipalName)
            }
        }
    }
}
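
Having worked out the group memberships, the clean-up itself can also be scripted. Here is a minimal sketch of removing a directly assigned SKU from a single user with the same module's Set-AzureADUserLicense cmdlet - the $user and $license variables are assumed to be the ones from the loops above, and you would only run this once the equivalent group-based assignment is in place:

# Sketch only: strip a directly assigned SKU from a single user.
# Assumes $user and $license from the script above, and that a
# group-based assignment already covers this user.
$removal = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$removal.RemoveLicenses = $license.SkuId
Set-AzureADUserLicense -ObjectId $user.ObjectId -AssignedLicenses $removal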

Office 365 - UK Data Residency deadline

Microsoft recently announced the deadline for existing Office 365 users to request relocation of their data to Microsoft's UK data centres.

To do this, a service administrator should log in to the Office 365 portal and select Settings, then Organization Profile. Scroll down and you will find the Data Residency option box, where you can now elect to have the data reside in the UK.

The deadline for requesting the move is 15th September 2017, and the move itself will apparently be completed within 24 months of the above deadline. That's a fair wait!

It should also be noted that only "core" data will be moved - although finding clarity on exactly what that covers is challenging. More details can be found here.

Release Management: The art of build promotion - Part 1

For a while now the software development industry has been pushing a release practice that ultimately centres on build promotion - the art of completing a build once, then promoting those artifacts through different environments or stages as the release progresses towards its release point, changing only environmental configuration along the way. This has the excellent objective of confirming that what you release is what you tested. Of course, that's not entirely the case if you have employed any feature toggling and the toggles are not matched up across the environments - but that's a different story!

I have been working with this practice for a while now, but there is always the odd situation raised by development teams about the best way to handle certain scenarios, such as support or emergency releases. Before I get onto those, let's have a run through the general theory.

The whole premise of Release Management stems from the desire to automate as much of the development pipeline as possible; partly to reduce risk, but more importantly to increase the speed at which changes can be applied.

So you usually start off with Continuous Integration - the practice of compiling your code often, say on each check-in to your version control. This confirms that the code you have committed can at least be integrated with the rest.

After that you add your Unit Tests (and, if you are lucky enough to have the automation, Smoke or Integration Tests) and you get Continuous Delivery. You can, in theory, be confident enough to take any PASSING build and send it to a customer. I say in theory, as this tends not to be the practice!

Finally you get Continuous Deployment. Some view this as the holy grail of release practices, as in essence you are deploying constantly: as soon as a build is passing and has been tested, you lob it out the door. Customers and users get to see new features really quickly, and developers get feedback quickly. In this practice you really only fix forwards - since you don't need to do masses of manual regression testing, fixing forwards is just as quick.

Build Promotion techniques appear in the last two of these. The approach can be used when you are able to do Continuous Delivery (you can select any build and promote it through the stages), but it can also apply to Continuous Deployment, where you might allow business stakeholders to select when and which builds are deployed, as long as you are confident they will work from a technical perspective. At worst, you use the technique (and tooling) to give you a mechanism for getting business stakeholder approval before allowing a release to go to production - something that is extremely important in regulated companies. In these cases Build Promotion is an auditor's dream, as you should be able to clearly identify what was deployed to production environments, when, and exactly what was changed.

Tooling such as VSTS / TFS makes Release Management and Build Promotion easy to get into these days - and now, with the web-based versions, it's actually usable. However, it really is not a holy grail. There are some things you need to consider.

Let's assume you have applied Release Management and Build Promotion techniques to your entire process - you will end up with a series of stages, or environments, that will look something like this:

Dev -> Test -> UAT -> Production

A build drops in at the first stage, Dev, after it is compiled (or after it is selected, if you are starting your process - or pipeline - manually). From there it goes through each stage sequentially, optionally requiring further (manual) interaction or approval.

Getting version one out the door with this process is easy enough. But what happens if you find a bug in version one just as version two is sitting at UAT getting its final rubber stamp? What would you do? Scrub version two, make the fix in that code base and restart the whole process again? Or do you scrub version two, make the fix on the version ONE code base and start a full sequence of deployments for THAT version?

Hmm.

And about now you get the realisation that you have approached Release Management and Build Promotion techniques the wrong way. Instead of creating a process that can be quick and agile, you have instead created something that is about as agile as a steel girder.

[To be continued!]

Upgrading lab to Server 2016

As Server 2016 is now GA, I figured I'd have a shot at upgrading my lab from 2012 R2 to 2016.

The majority of the machines were handled as in-place upgrades.

Before the upgrade:

- AD Server, also holding ADFS and WSUS (server core)
- Applications Server (full fat desktop)
- SQL Server (server core)
- Web Application Proxy (server core)

After the upgrade:

- AD Server (server core)
- ADFS Server (full fat desktop)
- Applications Server (full fat desktop)
- SQL Server (server core)
- Web Application Proxy (server core)

All 2012 R2 boxes were fully patched before upgrading. ADFS was split out onto a separate box, as this element cannot be upgraded in place - the process as documented here is fantastic for migrating it. I still haven't reinstalled WSUS, but I'm planning on rebuilding the SCCM element of my lab anyway, so I don't need it yet.

The only snag I've encountered is that when you do the in-place upgrade for server core, you need to run Setup with a couple of parameters - /Compat IgnoreWarning - otherwise you just get a blue screen.
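
For reference, that means launching the upgrade from the mounted media with:

setup.exe /Compat IgnoreWarning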

The Web Application Proxy server upgraded fine, but the WAP component was left in an invalid state; I just removed and reinstalled it, as I was resetting ADFS anyway -- likewise, Azure AD Connect needed to be reinstalled and reconnected.

Finally, a disk cleanup frees up space - just run:

dism.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

although it should be noted that this will prevent you from reverting the upgrade.

Next task is to upgrade the two Hyper-V hosts for my lab ...

Chocolatey and keeping your local feed up to date

For a while now I've been using a combination of Boxstarter and Chocolatey to help me manage and maintain my devices. But one of the snags I have encountered is keeping my local, moderated feed up to date with the public feed.

You are probably wondering ... why do I bother with a private package feed? Well, two reasons:

- Offline support; I often end up having to rebuild devices while away from home - and hotels don't have great wifi. I travel with a pre-loaded USB key with a copy of my feed (and installers) for just this reason

- Control; I like to keep control of what versions are on my devices and don't particularly like being forced onto the latest and greatest - and I like to ensure all my devices are on the same version ;)

So I wrote a very simple tool to compare my local package feed against the Chocolatey public feed; you can find it on GitHub: https://github.com/aneillans/ChocoCompare

And here it is in action:

D:\Dev\git\ChocoCompare\ChocoCheckUpdates\bin\Debug>chococompare
No settings found, please specify your repository locations
Chocolatey Repository [https://chocolatey.org/api/v2/] (Please enter to keep):
Local Repository [] (Please enter to keep): D:\Choco\Packages
Checking package 7-Zip (Install); local version is 16.02.0.20160811; remote version is 16.02.0.20160811
Checking package Beyond Compare; local version is 4.1.6.21095; remote version is 4.1.8.21575
Update available for Beyond Compare to 4.1.8.21575
Checking package Boxstarter; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter Bootstrapper Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter Chocolatey Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter Common Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter HyperV Module; local version is 2.8.29; remote version is 2.8.29
Checking package Boxstarter WinConfig Module; local version is 2.8.29; remote version is 2.8.29
Checking package Chocolatey; local version is 0.10.0; remote version is 0.10.1
Update available for Chocolatey to 0.10.1
Checking package ChocolateyGUI; local version is 0.13.2; remote version is 0.13.2
Checking package dashlane; local version is 4.1.1.10306; remote version is 4.1.1.10306
Checking package Fiddler; local version is 4.6.2.29442; remote version is 4.6.2.29442
Checking package Google Chrome; local version is 52.0.2743.116; remote version is 53.0.2785.116
Update available for Google Chrome to 53.0.2785.116
Checking package Windows Management Framework and PowerShell; local version is 5.0.10586.20151218; remote version is 5.0.10586.20151218
Checking package RSAT 1.0.5; local version is 1.0.5; remote version is 1.0.5
Checking package SQL Server Management Studio; local version is 13.0.15700.28; remote version is 13.0.15800.18
Update available for SQL Server Management Studio to 13.0.15800.18
Checking package Sublime Text 3; local version is 3.0.0.3114; remote version is 3.0.0.3114
Checking package Sysinternals; local version is 2016.07.29; remote version is 2016.08.29
Update available for Sysinternals to 2016.08.29
Checking package Visual Studio 2015 Enterprise Update 3; local version is 2015.03.01; remote version is 2015.03.02
Update available for Visual Studio 2015 Enterprise Update 3 to 2015.03.02
Finished checking packages; there are 6 packages to update.
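
If you just want a quick one-off check without the tool, the same comparison can be approximated in PowerShell, since Chocolatey feeds expose the standard NuGet v2 OData interface. A minimal sketch - the package id here is illustrative:

# Ask the public feed for the latest version of a single package
$feed = 'https://chocolatey.org/api/v2'
$id = 'GoogleChrome'
$entry = Invoke-RestMethod -Uri "$feed/Packages()?`$filter=Id eq '$id' and IsLatestVersion"
Write-Output ("Latest " + $id + " on the public feed is " + $entry.properties.Version)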

Building a .NET App and need low cost cloud logging?

One of the most frustrating things about being an app developer is getting log files from the applications you've built and deployed (although, don't forget, if you are deploying to end users you MUST get their permission!).

Here's an easy and cost-effective way to get around this problem.

Simply use NLog in your app, then check out this NLog to Azure Tables extension.

Perfect!

Introduction to Credential Federation

There are many misconceptions and misunderstandings around how credential federation works. This post outlines the (general) practice of federation and details the transaction flows, in an attempt to clarify the technical piece.

With a traditional authentication approach for a web application, you go to the web page and are prompted by the application to log in. Your details are then checked against details it holds (hopefully securely!). However, this means you are likely to have a different set of credentials to remember for every service that you use online. For an enterprise, this poses a challenge not only for the users, but also for the administrators when it comes to removing access as people move on to new pastures.

Federation addresses this by integrating online services with the on-premises Active Directory (or other) identity platform.

The moving parts

Federation Identity Server: This is the server that you, as the enterprise, deploy on your network and validate credentials against. This server also serves up the login prompt to your users, so you can usually brand it as needed. It is usually only accessible over HTTPS (for obvious reasons, I'm sure).

Relying Party: This is the third party that is going to consume the claims that your Federation Identity Server generates. Upon registering, you exchange signing key material with this party, allowing it to be 100% sure that the claims came from you. You need to take great care of this key; otherwise you might as well hand over your authentication database to whomever has it (they can pretend to be anyone on this party, you see).

Claims: A set of attributes that denote things like name, email address, role, etc. These are pretty flexible, normally configured as needed by the relying party, and are generally attributes from your authentication system. They are cryptographically signed so the originator can be verified. You'll probably hear the term SAML around this particular area.

So how does all this work?

The easiest way to explain it is to describe a user’s journey logging on via a federated approach.

1. The user visits the federated identity login page for the given cloud application.
Sometimes this is the same login page they would normally go to, and the application will "detect" they are federated when they enter their username.
2. The web app redirects them to your federated identity server's login page.
3. The user logs in.
4. The federated identity server validates the identity and generates claims.
These are often embedded in the response page after login, as a hidden form which is then submitted back to the relying party application.
5. The user is redirected back to the relying party application, where the claims are processed.
6. The user receives a relying party authentication token, as if they had logged in locally.
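
To make step 4 concrete, the response page is typically nothing more than an auto-submitting hidden form. A simplified, hypothetical skeleton (the field name and callback URL here are illustrative and vary by protocol and relying party):

<form method="post" action="https://app.example.com/federation/callback">
  <input type="hidden" name="wresult" value="...signed claims token..." />
</form>
<script>document.forms[0].submit();</script>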

Common myths

Myth: Credentials are transferred to the relying party.
Truth: In most federation implementations, claims are sent to the relying party cryptographically signed with a key that the relying party validates. This allows it to be confident that only the federated identity server generated the claims it has received, and therefore that it can trust them.

Myth: The federation identity server is safe from attack and is not exposed.
Truth: In order to be useful - i.e. contactable - the federation identity server has to be internet accessible, unless you restrict your users to logging in with federation from specific locations, which pretty much renders it useless.

Windows 10 - Remote Desktop - Login Failed

I've been getting an error that has been bugging me recently on my Windows 10 device - a Surface Pro 4 running the Fast Ring Insider build (at this time it's Build 14342_rs1_release.160506-1708).

Trying to Remote Desktop to a Windows Server 2008 box just wouldn't work -- no matter what combination of credentials I tried, it failed with a 'Login Failed' error. Oddly, connections to other machines running Server 2012 DID work. I shrugged it off and ignored it until I got time to look at it today - and the fix amazes me.

Simply enable the firewall exception for Remote Desktop -- fire up firewall.cpl, click the option to allow an app through the firewall, and select Remote Desktop. Reboot, and that's it.
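
If you'd rather do this from an elevated PowerShell prompt than firewall.cpl, enabling the built-in rule group should achieve the same thing:

# Enable the built-in Remote Desktop firewall rule group
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"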

Must be something getting blocked around the authentication or such. But at least that's working again.

Office 365, MDM

I was experimenting with Office 365's Intune offering last night - and made an interesting discovery.

Don't just enable it. You'll find you lock your users' devices out of accessing resources until you "fix" the policies. The default policy seems to be that any device accessing a 365 resource must be enrolled in the organisation's Intune account. Probably not a bad thing, but it might not play nicely with companies that have other MDMs deployed, such as Cisco Meraki or VMware's AirWatch. Guess I'll need to do a bit more testing here.

To disable the policy and regain access, visit the Intune page at: https://protection.office.com/#/device

Then go to Security Policies, Device Management.

Edit the Default MDM Policy by Office 365.

I think you have two choices here: disable the deployment, or add an exclusion. Personally I did both while I work things out - I set Deployment to No, and clicked on Manage Organisation Wide Device Access Settings to get to the exclusions option (I added the Default group here - which basically disabled Intune!).

Office 365, Azure AD and LiveID accounts

Recently a company that I deal with moved to Office 365 -- this entailed moving a number of identities into Microsoft Azure AD (think Active Directory, hosted on Azure and usable for SSO activity) and signing up for Office 365 services.

No problem - or so you'd think.

When trying to sign in on a Microsoft site with an enrolled Organisation account (that is, an account in the Azure AD), the OLD LiveID was being used intermittently. A quick chat with Microsoft confirmed this -- an Organisation account and a LiveID can exist on the SAME email address at the same time - surely a situation bound to lead to confusion!

The good news is you can disable / deactivate your LiveID to drop back to just using the Org account (where possible at least), but it does mean you have to be careful during the deactivation period to ensure you sign in at the right place (best to use office365.com addresses!).

However, the world gets a bit murky if you are using MSDN it seems: https://www.visualstudio.com/get-started/setup/link-msdn-subscription-to-organizational-account-vs

Am I the only one that can see this being a total nightmare for companies with a large dev population or other LiveID usage?