Release Management: The art of build promotion - Part 1

For a while now the software development industry has been pushing a release practice that is ultimately all about build promotion - that is, the art of building once, then promoting those artifacts through different environments or stages as they progress towards the release point, changing only environmental configuration along the way. This has the excellent objective of confirming that what you release is what you tested. Of course, that's not entirely the case if you have employed any feature toggling and the toggles are not matched up across the environments - but that's a different story!

For a while now I have been working with this practice, but there is always the odd situation raised by development teams about the best way to handle certain scenarios, such as support or emergency releases. Before I get onto those, let's have a run through the general theory.

The whole premise of Release Management stems from the desire to automate as much of the development pipeline as possible; partly to reduce risk, but more importantly to increase the speed at which changes can be applied.

So you usually start off with Continuous Integration - the practice of compiling your code often, say on each check-in to your version control. This confirms that the code you have committed can at least be integrated with the rest.

After that you add your Unit Tests (and, if you are lucky enough to have the automation, Smoke or Integration Tests) and you get Continuous Delivery. You can, in theory, be confident enough to take any PASSING build and send it to a customer. I say in theory, as this tends not to be the practice in reality!

Finally you get Continuous Deployment. Some view this as the holy grail of release practices, as in essence you are deploying constantly. As soon as a build is passing and has been tested, you lob it out the door. Customers and users get to see new features really quickly, and developers get feedback quickly - in this practice you really only fix forwards, which is just as quick because you don't need to do masses of manual regression testing.

Build Promotion techniques kind of appear in the last two of these - they can be used when you are able to do Continuous Delivery (you can select any build and promote it through the stages), but they also apply to Continuous Deployment, where you might allow business stakeholders to select when and which builds are deployed, as long as you are confident enough from a technical perspective that they will work. At worst, you use the technique (and tooling) as a mechanism for getting business stakeholder approval before allowing a release to go to production - something that is extremely important in regulated companies. In these cases Build Promotion is an auditor's dream, as you should be able to clearly identify what was deployed to production environments and when, and exactly what changed.

Tooling such as VSTS / TFS makes Release Management and Build Promotion easy to get into these days - and now, with the web based versions, it's actually usable. However, it really is not a holy grail. There are some things you need to consider.

Let's assume you have applied Release Management Build Promotion techniques to your entire process - you will end up with a series of stages, or environments, that will look something like this:

Dev -> Test -> UAT -> Production

A build drops in at the first stage, Dev, after it is compiled (or after being selected, if you are starting your process - or pipeline - manually). From there it goes through each stage sequentially, optionally needing further (manual) interaction or approval.

Getting version one out the door on this process is easy enough. But what happens if you find a bug in version one just as you have version two sitting at UAT getting its final rubber stamp? What would you do? Scrub version two, make the fix in that code base and restart the whole process again? Or do you scrub version two, make the fix on the version ONE code base and start a full sequence of deployments for THAT version?

Hmm.

And about now you get the realisation that you have approached Release Management and Build Promotion techniques the wrong way. Instead of creating a process that is quick and agile, you have created something about as agile as a steel girder.

[To be continued!]

Getting Windows 10 IoT for RPi2 ... without a physical Windows box...

Windows 10 IoT edition has been available for a while now, but I have only just gotten around to looking at deploying it to my Model B Raspberry Pi 2. I had figured this would be a simple matter of downloading an image and then flashing it onto the SD card, job done. But Microsoft have taken a different route, one which seems to require a physical Windows 10 box to successfully flash the card.

Now, I don't have access to this at home - all my Windows machines are virtualised, and my physical machines all run OS X.

After much googling, and many dead ends, the general process that worked for me was the one posted by MikeAtWeaved here: https://www.raspberrypi.org/forums/viewtopic.php?f=105&t=108979

Generally this was:

- Grab the IoT download, and get the flash.ffu file
- Download ImgMount Tool: http://forum.xda-developers.com/showthread.php?t=2066903
- Download DiskInternals Linux Reader: http://www.diskinternals.com/download/

Using ImgMount, mount the flash.ffu file - it will appear empty, but ignore this.

Using Linux Reader, select the Virtual Disk and tell it to Create an Image (.img extension).

Compress this img and move it onto your Mac OS X machine. Decompress it.

Then use dd at a terminal (sudoed of course) to write this onto an SD Card.
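From memory, the terminal session looks something like this - /dev/disk2 is purely an example, so run diskutil list first and substitute the actual identifier for your SD card (writing to the wrong disk will destroy its contents):

diskutil list
sudo diskutil unmountDisk /dev/disk2
sudo dd if=flash.img of=/dev/rdisk2 bs=1m

Using the raw device (rdisk2 rather than disk2) makes the write considerably faster.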

I am, however, very surprised that the PowerShell Remoting services are enabled on HTTP and the Device Management web page (on port 8080) is HTTP only too. Why no SSL by default?
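For what it's worth, connecting to that remoting endpoint from a Windows box goes something like this (the IP address here is just an example for your Pi):

net start WinRM
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 192.168.1.50
Enter-PSSession -ComputerName 192.168.1.50 -Credential 192.168.1.50\Administrator

It works, but there is no transport-level SSL anywhere in the mix - hence the grumble.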

Umbraco 7 on Azure Website

This weekend I decided to finally get around to moving two static websites from being an MVC website project in Visual Studio to something that the wife could look after - so I set about rebuilding them in Umbraco.

Considering it's been a while since I used Umbraco, I was tempted to download it and create a new project, pulling it in via NuGet. But then I noticed it was in the Azure Website Gallery. A few clicks later, a website was working (New, Website, From Gallery, Umbraco), and even running on a free Web tier SQL Database (20 MB limit) - however, I do wonder how this is going to pan out, as the Web tier is due to be retired this September.

Once there was a basic instance running, the next thing was obviously to start building out the templates and document types - and creating the sites (I run both sites off a single instance of Umbraco, and just use the hostname mapping feature it has).
The problem appeared when it came to FTPing to the instance - I keep forgetting that the nodes (in this case running in a cluster) take a while to synchronise, so if you edit you need to wait... Or, as I found out, it's easier to enable Monaco (the Visual Studio web editor that works with Azure) - that way you can make the edits, hit save, and it's instant.

Finally, a quick upgrade by grabbing 7.2.4 from the Umbraco website and replacing the bin, Umbraco, Umbraco_Client and Views\Partials\Grid folders, and I was done.

My only gripe? The Azure installer doesn't give any indication of, nor method for, changing the configuration that's held in the Config\umbracoSettings.config file - some of which (like the mail server) users will probably want to alter easily without messing with FTP.


BlogEngine.NET and MySQL Site Map Provider (.NET Connector)

I just encountered an error that had me stumped for a short while - I installed the MySQL .NET Connector onto one of my servers and suddenly my installation of BlogEngine.NET ceased to work - I was unable to log in to the admin, and just encountered the yellow screen of death.

After a little digging, I identified that the MySQL Connector had modified the machine.config, adding its Site Map Provider into the list. As this wasn't configured, it was throwing an exception in the BlogEngine.NET code...

The fix? Simply adding <clear /> to the siteMap providers list within the system.web block in the root web.config, and the site sprang back into life!
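For reference, the relevant block ends up looking something like this (any providers BlogEngine.NET itself declares would follow the <clear /> - the point is that it stops the MySQL provider being inherited from machine.config):

<system.web>
  <siteMap>
    <providers>
      <clear />
    </providers>
  </siteMap>
</system.web>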

Octopus Deploy and ProGet

This weekend I switched my local Octopus Deploy server over to using ProGet as the package repository.

Generally speaking, it was a pretty painless switch - but I was getting errors until I added an advanced MSBuild argument (/p:OctoPackPublishApiKey=BuildKey) to provide an API key; obviously you need to configure this key in ProGet :)
Initially I didn't think I would need to provide the API key, as the build agent was running under a user account that had full access to the feeds; it seems, however, that this is not the case when you are running normal authentication (i.e. not domain joined).
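For anyone hitting the same errors, the full set of OctoPack arguments ended up along these lines - the solution name and feed URL are placeholders, so substitute your own ProGet NuGet feed endpoint:

msbuild MySolution.sln /p:RunOctoPack=true /p:OctoPackPublishPackageToHttp=http://proget.local/nuget/Default /p:OctoPackPublishApiKey=BuildKey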

TFS 2013 Update 2 - Problems with _oi Pages (TF400898)

I've been testing TFS 2013 Update 2 over the last few days, and encountered a couple of issues.
The first appears to be a problem specific to one Team Project Collection, where the Activity Log (accessed via _oi) does not render correctly (I'm progressing this with MS Support); however, the second one is slightly more interesting.

If you go into the /_oi interface, select Job Monitoring, and pick a job, you get to the detail page. Only now you also get something unfriendly.


I confirmed this on a clean install of Server 2012 R2 with TFS 2013 Update 2, so it seems that this is a breaking "change".
Hopefully a hot fix comes out soon for this one (and maybe the Activity Log issue).

Update: Just as I'm posting this, I get a call back from MS Support. Both issues are confirmed as bugs, and will be fixed in a future release. Issues that you encounter with the Activity Log should automatically clear up after a month or two as old entries are purged - so if you hit a TF400898 there, you will probably have to put up with it for a while!

Updating Assembly Versions During TFS Builds

An article of mine has been published on CodeProject - http://www.codeproject.com/Articles/705482/Updating-Assembly-Versions-During-TFS-Builds.

In this article I explain how to modify the AzureContinuousDeployment workflow so that your hosted builds apply version stamping (i.e. update things like the revision number in the build number to correctly reflect your changeset number); the approach can easily be adapted to fit an on-premise TFS installation.
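The article covers the workflow changes in detail, but at its heart the versioning step is just a find-and-replace over the AssemblyInfo files before compilation. A rough PowerShell sketch of the equivalent logic - the changeset number and sources path here are hypothetical stand-ins, since in the real workflow they come from the build environment:

# Hypothetical values - in the real workflow these come from the build
$changeset  = 12345
$sourcesDir = 'C:\Builds\1\MyProject\src'

# Rewrite the revision part of AssemblyFileVersion in every AssemblyInfo.cs
Get-ChildItem -Path $sourcesDir -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
    (Get-Content $_.FullName) -replace 'AssemblyFileVersion\("(\d+)\.(\d+)\.(\d+)\.\d+"\)',
        "AssemblyFileVersion(`"`$1.`$2.`$3.$changeset`")" | Set-Content $_.FullName
}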

Using the AzureContinuousDeployment Build Process Template on your own server

Wanting to deploy to Azure using Continuous Delivery, but not wanting to use the Visual Studio Hosted Build servers?

No problem; but you need to install the right version of things first!

I installed:

 

I then installed a couple of the assemblies from the Azure SDK libs into the GAC; these live in C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\v2.0\ref, and I installed:
Microsoft.ServiceBus
Microsoft.WindowsAzure.Storage
Microsoft.WindowsAzure.Configuration
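If memory serves, registering them goes something like this from a Visual Studio command prompt (which puts gacutil on the path; the folder below matches the v2.0 SDK location above):

cd "C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\v2.0\ref"
gacutil /i Microsoft.ServiceBus.dll
gacutil /i Microsoft.WindowsAzure.Storage.dll
gacutil /i Microsoft.WindowsAzure.Configuration.dll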

If you don't do this, you get an error during the deployment portion of the workflow.

If you try it, let me know how you get on!

Visual Studio 2013 - Feedback tool

Those of you that use Visual Studio in a medium to large company that's worried about data security need to consider the addition of the Feedback tool that Microsoft are bundling with Visual Studio.

While I do applaud Microsoft for apparently wanting to engage more with the people who actually use their product, I have to worry about this feature.

Why?

Simple. It takes screenshots - and not just of the Visual Studio windows, which would be bad enough in some circumstances, but of all your desktops.

The good news is you can disable it pretty easily. The bad news is you'll have to ensure the registry key deletion occurs every time a user logs on to your network, as I've seen it occasionally reappear. A nightmare; there is no easy way to remove it centrally, block it, or otherwise suppress it. Not ideal for an enterprise.

How do you drop this item from the menu?

Delete this key:

HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\12.0_Config\MainWindowFrameControls\{F66FBC48-9AE4-41DC-B1AF-0D64F0F54A07}
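If you want to script that deletion (in a logon script, say), the one-liner below should do it - though do test it in your own environment first:

reg delete "HKCU\Software\Microsoft\VisualStudio\12.0_Config\MainWindowFrameControls\{F66FBC48-9AE4-41DC-B1AF-0D64F0F54A07}" /f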

Are Microsoft losing their way with user experience?

Over the past few months there has been a fair bit in the industry press about Apple and Microsoft apparently losing their respective grasps on the market.

Apple has been (or so it appears) struggling to get a new product into the market, and is instead just refreshing existing lines.

Microsoft is slipping in its game too - the latest gaffe has to be the way it's handled Windows 8.1. The recent U-turn on not releasing the RTM version to the development and sys admin community was welcome, but it is disagreeable to discover that you cannot do a straight upgrade of Windows 8 to 8.1. Instead it enforces a clean install, while offering you the ability to save your files. According to all the recent MSDN coverage (literally in the last 24 hours or so), the "release" version will support this, but I have to ask why Microsoft decided to release an essentially incomplete version.

Historically, the whole idea of RTM or Gold copies was that these were the images sent for physical manufacture. These days things still follow this pattern, but companies such as Microsoft now tend to "tweak" things before they are actually released - in theory to provide a better quality product, but in this case you have to wonder. Releasing an incomplete, inferior product a month before General Availability is annoying, and downright disruptive to the development community. Microsoft's attitude? It doesn't matter. You shouldn't be using this release for anything except testing, so why can't you just build a new machine?

Has the development industry lost track of what should happen for release cycles? Are we now expecting too much, too quickly from software companies?

Or have the big boys (Apple and Microsoft, for example) started to lose their way with handling user experience?