Deploying Cloud Foundry with VMware vSphere - Part 1

Cloud Foundry is an interesting platform / management system from EMC's Pivotal team that (essentially) lets you manage Docker-style containers and the associated infrastructure they run on.

But one of the features I love the most is that it will directly integrate with VMware vSphere and take so much of the pain away.

However, the initial setup can be a bit ... awkward, so I thought I'd document how I got it working.

First off, I assume you have a fully working vSphere setup - ESXi hosts configured and running, along with the vSphere Appliance or an installation on a Windows box somewhere. I tested this with vSphere 6.

Next, give the instructions a read: http://docs.cloudfoundry.org/deploying/vsphere/
Now, my private setup for testing was nowhere near the minimum specifications (I run a couple of HP MicroServers at home for testing things), but this didn't stop me continuing!

One thing to bear in mind: if you have multiple ESXi servers, make sure you have a shared datastore mounted on all of them.

The first main step to deploying Cloud Foundry is deploying the MicroBOSH image onto vSphere.
Pretty straightforward, but some jumping around is needed. Create a folder on your machine to hold things, and create the manifest.yml as per the documentation. Go through and replace the placeholder items (such as IP addresses and anything in caps) to make it valid - note that you need to use a vSphere cluster and shared storage here - I cheated, and had a cluster with a single node and an NFS datastore that I'd created earlier.
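
For reference, the skeleton of my manifest.yml ended up looking roughly like this - treat it as a sketch, as the IPs, credentials and vCenter object names are placeholders from my lab rather than values you can copy verbatim:

---
name: microbosh

network:
  ip: 192.168.0.30
  netmask: 255.255.255.0
  gateway: 192.168.0.1
  dns: [192.168.0.1]
  cloud_properties:
    name: VM Network              # the vSphere port group to attach to

resources:
  persistent_disk: 16384
  cloud_properties:
    ram: 4096
    disk: 16384
    cpu: 2

cloud:
  plugin: vsphere
  properties:
    agent:
      ntp: [pool.ntp.org]
    vcenter:
      address: 192.168.0.10       # vCenter, not an ESXi host
      user: administrator@vsphere.local
      password: REPLACE_ME
      datacenters:
        - name: Datacenter
          vm_folder: BOSH_VMs
          template_folder: BOSH_Templates
          disk_path: BOSH_Disks
          datastore_pattern: NFS-Datastore
          persistent_datastore_pattern: NFS-Datastore
          allow_mixed_datastores: true
          clusters: [Cluster01]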

After that, grab MicroBOSH off the web. Installing this onto my Mac (OS X) literally took opening a terminal and running one command, thanks to Yosemite already shipping with a Ruby install - but I did need to run it with sudo!
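
For me, that one command was something along these lines (the micro plugin pulls in the main bosh_cli gem as a dependency):

sudo gem install bosh_cli_plugin_micro --no-ri --no-rdoc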

With BOSH on your machine, the next thing is a stemcell. This is basically the "starting point" for Cloud Foundry - a base OS image that BOSH clones to create its VMs.

So ...

Downloaded the stemcell with: 

bosh download public stemcell bosh-stemcell-2969-vsphere-esxi-ubuntu-trusty-go_agent.tgz

Prepped the deployment with: 

bosh micro deployment manifest.yml

When I went to deploy using: 

bosh micro deploy bosh-stemcell-2969-vsphere-esxi-ubuntu-trusty-go_agent.tgz 

I got an error: 

either of 'genisoimage' or 'mkisofs' commands must be present

Fixable using: 

brew install cdrtools 

(you need Homebrew installed - google it).

I also encountered an issue where the stemcell I'd picked wasn't actually provisioning the network... initially I thought this was a bug with the process, as no errors were given, but I then discovered (after a lot of googling) that I had an error in my manifest.yml file. It seems the manifest is really sensitive, and there's pretty much no validation.

Full console output of provisioning micro bosh:

iMac:micro-deployment andy$ bosh micro deploy bosh-stemcell-2969-vsphere-esxi-ubuntu-trusty-go_agent.tgz 
No `bosh-deployments.yml` file found in current directory.
Conventionally, `bosh-deployments.yml` should be saved in /Users/andy.
Is /Users/andy/micro-deployment a directory where you can save state? (type 'yes' to continue): yes
Deploying new micro BOSH instance `manifest.yml' to `https://192.168.0.30:25555' (type 'yes' to continue): yes
Verifying stemcell...
File exists and readable                                     OK
Verifying tarball...
Read tarball                                                 OK
Manifest exists                                              OK
Stemcell image file                                          OK
Stemcell properties                                          OK
Stemcell info
-------------
Name:    bosh-vsphere-esxi-ubuntu-trusty-go_agent
Version: 2969
  Started deploy micro bosh
  Started deploy micro bosh > Unpacking stemcell. Done (00:00:07)
  Started deploy micro bosh > Uploading stemcellat depth 0 - 20: unable to get local issuer certificate
at depth 1 - 19: self signed certificate in certificate chain
. Done (00:22:07)
  Started deploy micro bosh > Creating VM from sc-54cdca8c-6b13-4c3d-ae48-5bc57d9b93ffat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:21:46)
  Started deploy micro bosh > Waiting for the agentat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:02:23)
  Started deploy micro bosh > Updating persistent disk
  Started deploy micro bosh > Create disk. Done (00:00:02)at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
  Started deploy micro bosh > Mount diskat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:01:35)
     Done deploy micro bosh > Updating persistent disk (00:01:50)
  Started deploy micro bosh > Stopping agent services. Done (00:00:01)
  Started deploy micro bosh > Applying micro BOSH specat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:06:16)
  Started deploy micro bosh > Starting agent services. Done (00:00:00)
  Started deploy micro bosh > Waiting for the directorat depth 0 - 20: unable to get local issuer certificate
. Done (00:01:14)
     Done deploy micro bosh (00:55:44)
Deployed `manifest.yml' to `https://192.168.0.30:25555', took 00:55:44 to complete
at depth 0 - 20: unable to get local issuer certificate

After that, you can check the deployment is valid with: 

bosh target 192.168.0.30

You should be prompted to log in; in my case (because my yml didn't define anything) the default credentials of admin / admin were valid.
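
So the quick smoke test looks something like this - bosh status should come back with the director's details if everything deployed properly:

bosh target 192.168.0.30
bosh login
bosh status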

Next step, deploying BOSH.


Umbraco 7 on Azure Website

This weekend I decided to finally get around to moving two static websites from being an MVC website project in Visual Studio to something that the wife could look after - so I set about rebuilding them in Umbraco.

Considering it's been a while since I used Umbraco, I was tempted to download it and create a new project, pulling it in via NuGet. But then I noticed it was in the Azure Website Gallery. A few clicks later (New, Website, From Gallery, Umbraco), a website was working, and even running on a free Web-tier SQL Database (20 MB limit) -- however, I do wonder how this is going to pan out, as the Web tier is due to be retired this September. 

Once there was a basic instance running, the next thing is obviously to start building out the templates and document types - and creating the sites (I run both sites off a single instance of Umbraco, and just use the hostname mapping feature it has).
The problem appeared when it came to FTPing to the instance - I keep forgetting that the nodes (in this case running in a cluster) take a while to synchronise, so if you edit, you need to wait... Or, as I found out, it's easier to enable Monaco (the Visual Studio web editor that works with Azure) - that way you can make the edits, hit save, and it's instant.

Finally, a quick upgrade by grabbing 7.2.4 from the Umbraco website and replacing the bin, Umbraco and Umbraco_Client folders, plus the Views\Partials\Grid folder, and I was done.

My only gripe? The Azure installer doesn't give any indication of, nor method for, changing the configuration that's held in the Config\umbracoSettings.config file - some of which (like the mail server) users probably want to alter easily without messing with FTP.
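
For anyone hunting around, one of the settings I mean lives under the content section of Config\umbracoSettings.config - roughly like this (the address is a placeholder, and the exact layout may vary between Umbraco versions):

<content>
  <notifications>
    <!-- the address Umbraco sends notification emails from -->
    <email>noreply@example.com</email>
  </notifications>
</content>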


BlogEngine.NET and MySQL Site Map Provider (.NET Connector)

I just encountered an error that had me stumped for a short while - I installed the MySQL .NET Connector onto one of my servers and suddenly my installation of BlogEngine.NET ceased to work - I was unable to log in to the admin, and just encountered the yellow screen of death.

After a little digging, I identified that the MySQL Connector had modified the machine.config and added its Site Map Provider into the list. As this wasn't configured, it was throwing an exception in the BlogEngine.NET code...

The fix? Simply adding <clear /> to the siteMap providers list within the system.web block in the root web.config, and the site sprang back into life!
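
In other words, something along these lines - the provider entry shown is illustrative, so keep whatever your version of BlogEngine.NET already declares there:

<system.web>
  <siteMap defaultProvider="PageSiteMap">
    <providers>
      <!-- stop providers inherited from machine.config (like MySql's) leaking in -->
      <clear />
      <add name="PageSiteMap" type="BlogEngine.Core.Web.SiteMap.PageSiteMap, BlogEngine.Core" />
    </providers>
  </siteMap>
</system.web>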

Octopus Deploy and Proget

This weekend I switched my local Octopus Deploy server over to using ProGet as the package repository.

Generally speaking, it was a pretty painless switch - but I was getting errors until I added an advanced MSBuild argument (/p:OctoPackPublishApiKey=BuildKey) to provide an API key; obviously you need to configure this key in ProGet :)
Initially I didn't think I would need to provide the API key, as the build agent was running under a user account that had full access to the feeds; it seems, however, that this is not the case when you are running normal authentication (i.e. not domain joined).
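
For context, the full set of OctoPack-related MSBuild arguments on the build definition ended up looking something like this (the feed URL is a placeholder for your own ProGet server):

/p:RunOctoPack=true /p:OctoPackPublishPackageToHttp=http://proget.example.local/nuget/Default /p:OctoPackPublishApiKey=BuildKey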

TFS 2013 Update 2 - Problems with _oi Pages (TF400898)

I've been experimenting with TFS 2013 Update 2 over the last few days, and have encountered a couple of issues.
The first appears to be a problem specific to a Team Project Collection, where the Activity Log (accessed via _oi) does not render correctly (I'm progressing this with MS Support); the second one, however, is slightly more interesting.

If you go into the /_oi interface, select Job Monitoring and pick a job, you get to the detail page. Only now you also get something unfriendly.


I confirmed this on a clean install of Server 2012 R2 with TFS 2013 Update 2, so it seems that this is a breaking "change".
Hopefully a hot fix comes out soon for this one (and maybe the Activity Log issue).

Update: Just as I was posting this, I got a call back from MS Support. Both issues are confirmed as bugs, and will be fixed in a future release. Issues that you might encounter with the Activity Log should automatically clear up after a month or two as old entries are purged - so if you encounter a TF400898 here, you will probably have to put up with it for a while! 

Trouble with Apple's System Preferences for Printing?

Having trouble with your printer under OS X? Has it changed IP or DNS name, and are you now faced with having to delete it and re-create it?

Don’t worry - you can skip the rather limited Apple GUI and simply visit http://localhost:631 on your Mac (the CUPS web interface).

If this is your first time, you will encounter an error saying you need to enable the web interface; to do this, open Terminal and type in: 

sudo cupsctl WebInterface=yes

You’ll be prompted for your password.

Then click Administration, select Manage Printers, then select the printer and make the changes you need; you'll be prompted for your username and password again.

Now, in my case the problem was AirPrint, so I simply changed the connection to IPP over HTTP, and it's fixed!
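
If you'd rather stay in Terminal, lpadmin can make the same kind of change directly - the printer name and URI below are examples, so substitute your own (lpstat -v will show you what's currently configured):

lpstat -v
lpadmin -p My_Printer -v ipp://192.168.0.50/ipp/print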

Poor password policies put users at risk

It's not exactly new, and it's something that almost every web developer knows... or at least should.

Poor password policies put users' data at risk; and the larger (or more high-profile) the product, the more of a target you become - especially if you are potentially storing anything that is profitable for a crook.

So why do so many large, high-profile websites have poor password policies? It's not technical, that's for sure. It can only be laziness, poor standards, or pressure on the development teams behind them.

After the recent breach that impacted over 2,000 Tesco customers - a breach that Tesco still says occurred on OTHER websites, not theirs, and one that ended up with my own account being locked (even though it was not on the breached data list published on Pastebin), a point I still haven't had a clear, satisfactory response from Tesco about - I decided to work through a few of the websites I use and see what their password policies were like.

Many were pretty good - allowing you to use long, complex passwords. However, there were some interesting "issues" that I found:

British Gas

Required an alphanumeric password (no symbols), with a length of between 8 and 20 characters.
Oh wait, no it doesn't - try to use a 20-character password and it fails, saying it needs to be 16 max. Poor design / UAT work here.

HMRC

Required an alphanumeric password (no symbols), with a length of 8 to 12.
Rather worryingly, the passwords are case insensitive.

Confused.Com

Required an alphanumeric password (no symbols) with a length of 6 to 20.

O2

Required a password of between 7 and 16 characters.
Limited symbol set accepted.

Tesco

Password length of 6 to 10 characters (what on earth?!)
Reported as being case insensitive, but I didn't test this.
Alphanumeric only - no symbols.

As you can see, Tesco is by far the worst offender that I've encountered on my short wander around the internet - but I'm absolutely amazed by HMRC's policy; considering what they secure, that is awful.

There is no excuse for poor password management / policies; I just wish it didn't take people's information being leaked into the public domain before companies start to pay attention.

If I get time, I'll have a look at some other sites - in the meantime, I'd strongly recommend practising good password management and using a different password on each website (there are tools such as KeePass and LastPass to help you keep a note of them - securely - so you don't really have an excuse).

I'd also recommend plugging your email address into https://haveibeenpwned.com/ - a great service by Troy Hunt.

Updating Assembly Versions During TFS Builds

An article of mine has been published on CodeProject - http://www.codeproject.com/Articles/705482/Updating-Assembly-Versions-During-TFS-Builds.

In this article I explain how to modify the AzureContinuousDeployment workflow so that your hosted builds version-stamp assemblies (i.e. update things like the revision number in the build number to correctly reflect your changeset number); this approach can easily be adapted to fit an on-premise TFS installation.

Using the AzureContinuousDeployment Build Process Template on your own server

Wanting to deploy to Azure using Continuous Delivery but not use the Visual Studio Hosted Build servers?

No problem; but you need to install the right version of things first!

I installed:

 

I then installed a couple of the assemblies from the Azure SDK libs into the GAC; these live in C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\v2.0\ref, and I installed:
Microsoft.ServiceBus
Microsoft.WindowsAzure.Storage
Microsoft.WindowsAzure.Configuration
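
From an elevated Visual Studio command prompt, that looks roughly like this (gacutil ships with the Windows SDK, so the exact path to it may vary):

cd "C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\v2.0\ref"
gacutil /i Microsoft.ServiceBus.dll
gacutil /i Microsoft.WindowsAzure.Storage.dll
gacutil /i Microsoft.WindowsAzure.Configuration.dll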

If you don't do this, you get an error during the deployment portion of the workflow.

If you try it, let me know how you get on!

Visual Studio 2013 - Feedback tool

Those of you that use Visual Studio in a medium to large company that's worried about data security need to consider the addition of the Feedback tool that Microsoft is bundling with Visual Studio.

While I do applaud Microsoft for apparently wanting to engage more with the people who actually use their product, I have to worry about this feature.

Why?

Simple. It takes screenshots - and not just of the Visual Studio windows (which would be bad enough in some circumstances), but of all your desktops.

The good news is you can disable it pretty easily. The bad news is you'll have to ensure a registry key deletion occurs every time a user logs on to your network, as I've seen it occasionally reappear. A nightmare: there's no easy way to remove it centrally, block it, or otherwise suppress it. Not ideal for an enterprise.

How do you drop this item from the menu?

Delete this key:

HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\12.0_Config\MainWindowFrameControls\{F66FBC48-9AE4-41DC-B1AF-0D64F0F54A07}
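
In a logon script, that deletion could look something like this (reg delete is built into Windows; /f skips the confirmation prompt):

reg delete "HKCU\Software\Microsoft\VisualStudio\12.0_Config\MainWindowFrameControls\{F66FBC48-9AE4-41DC-B1AF-0D64F0F54A07}" /f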