Hosting, Virtual Hosting, Dedicated Servers .. what's the diff?

I’ve been going through the infrequent hassle of moving my hosting provider.

I previously had a dedicated server (a very nice, very powerful box, but I doubt anyone is interested in that ;)) with a company called UK Solutions down in Redditch. Recently I decided that, as I no longer needed the flexibility of my own virtualised system (and of course want less hassle, so I can devote more time to my other budding projects!), I would migrate everything and terminate the co-lo agreement.

And so the fun began.

I started off with a Reseller account (basically an account that lets me throw all the sites etc. up and let someone else deal with the hardware and management – much nicer for me these days!). As they are using Plesk, things were a little fiddly to start with (and I still don’t get the icons loading, but hey, it works), but once I got the hang of things it all went as smooth as anything. I even get daily reporting straight to my inbox :) And customer control panels. Dear god, that's scary … I can get them to do their own email setup and leave me in peace. Next thing you know, I’ll be charging for hosting. Oh look, a flying pig.

I fired off my contract cancellation to UK Solutions with a good couple of months left on my current term, and was rather dismayed to see that they added another 5 weeks onto the term to bring it up to a full quarter period. It seems there is a 3-month exit clause on the contract. Argh. You would have thought that, when you are terminating with a decent period left anyway, they would show a little flexibility on this clause – especially when I’d been a customer of theirs for three years without any hassle.

Two points to sign off this minor rant :)

1. Check your contracts … always read the small print

2. You don’t ALWAYS get what you pay for. My new hosts are a LOT cheaper than UK Solutions, and so far I have absolutely no complaints – even when logging fault reports, things get resolved swiftly. Keep up the good work guys.

Enterprise-level Hyper-V - Experiences

Over the past few weeks, us geeks at Money Dashboard have been hard at work building our production environments, and as I am sure you can imagine we hit a few issues.

To help anyone else out who might be thinking about deploying, or is already deploying, a fairly complex environment around Hyper-V, I thought I would share our findings.

First off, as I’m sure you can all appreciate, I am unable to go into any real specific details on our implementation, so some of this information may be a little strange, or difficult to follow – bear with me, and hopefully if you ever find yourself in a similar situation it might just help you out!

iSCSI and Virtual Machines

If, like us, you are virtualising your SQL Servers, then do not forget that you will need to bring some iSCSI (or whatever storage system) mounts through to the VMs. On the surface this does not pose a problem, but we DID hit problems when pulling iSCSI through. As you can imagine, we are running Jumbo Frames (MTU 9000) on our iSCSI network in order to optimise throughput, but the default VM adapters only support standard packet sizes (i.e. MTU 1500). To get around this, you need to use the Synthetic Network Adapter in the Hyper-V VM, and be sure to set its properties to enable Jumbo Frames. You must also have the physical NIC on the server set for Jumbo Frames. Always worth checking with the following command:

netsh interface ipv4 show subinterface
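For reference, this is roughly how we brought the host-side MTU up as well – a sketch only, where the interface name "iSCSI" is a placeholder for whatever your iSCSI-facing adapter is called:

```
:: Raise the IP-layer MTU on the iSCSI-facing interface ("iSCSI" is a
:: placeholder name) and make the change survive a reboot
netsh interface ipv4 set subinterface "iSCSI" mtu=9000 store=persistent

:: Then confirm it took
netsh interface ipv4 show subinterface
```

Note that this only covers the IP-layer MTU – the jumbo frame setting on the physical NIC itself still needs enabling in the adapter driver's Advanced properties.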

You may, like us, then notice some packet loss on the iSCSI adapter. In our case this turned out to be something strange going on with the way the Synthetic adapters were behaving with our Broadcom NICs (BCM5709, in case anyone is interested!). Disabling all the offload components (TCP, iSCSI and Checksum) fixed the problem, but we still do not know exactly why it was occurring …
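If you want to try the same workaround, this is the general shape of it – the TCP offload switch is host-wide, while the checksum and iSCSI offloads on our Broadcom cards lived in the driver's Advanced properties (exact option names vary by driver version):

```
:: Turn off TCP Chimney offload host-wide
netsh int tcp set global chimney=disabled

:: Checksum and iSCSI offload are per-adapter driver settings - on the
:: BCM5709s we disabled them under Device Manager -> NIC Properties ->
:: Advanced (option names differ between driver versions)
```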

QoS …. do NOT forget it

Make sure you split your Live Migration and Heartbeat traffic onto separate networks, and most certainly remember to apply those Quality of Service rules on the switchgear.

We forgot, and as soon as Live Migration kicked off, the complete Hyper-V cluster went mental … it thought that all the other nodes had failed, so EVERY node went to start EVERY virtual machine. As you can no doubt imagine, absolute chaos ensued, and the virtual machine disks were corrupted (one catch with using the new Cluster Shared Volumes it seems).
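If it helps, the cluster network layout can be sanity-checked from PowerShell using the 2008 R2 failover clustering module – a sketch, with the network name "Heartbeat" as a placeholder:

```powershell
Import-Module FailoverClusters

# List the networks the cluster knows about - Role shows whether a
# network carries cluster traffic, and the lowest Metric is preferred
# for cluster / CSV communication
Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric

# Pin the heartbeat / CSV network to the lowest metric so Live
# Migration traffic cannot crowd it out
(Get-ClusterNetwork "Heartbeat").Metric = 500
```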

Dell EqualLogic, and BACS

We are lucky enough to be using pretty much all Dell kit, including the Dell EqualLogic PS6000XV as our SAN. One snag we did hit is that you really do not want to route your iSCSI traffic over a virtual adapter created through the Broadcom BACS suite … on the surface it will appear to work, but when you start looking carefully you will notice that a significant bandwidth limitation creeps in somewhere. Not sure if it was the BACS drivers or the Dell MPIO driver, but it disappeared when we reverted to using proper physical NICs. Equally, do not forget to install the Dell MPIO drivers into any virtual machines that are using iSCSI :)

Adding Hyper-V Clusters in Microsoft System Center Virtual Machine Manager

When you are finally ready to add your machines into SCVMM, add the cluster name – not the individual machine names. It seems that if you add the individual machines, SCVMM does NOT treat them as an HA cluster. I haven’t found any logical way of merging multiple machines into a cluster in SCVMM, or any real reason why it doesn’t just prevent you from adding the individual nodes in the first place (it can see they are in an HA cluster configuration, after all).
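For the scripting-inclined, adding the cluster by name can also be done from the VMM PowerShell snap-in – a sketch from memory, with the server and cluster names as placeholders:

```powershell
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

# Connect to the VMM server ("vmm01" is a placeholder)
$vmm = Get-VMMServer -ComputerName "vmm01"

# Add the CLUSTER, not the nodes - VMM then discovers the individual
# hosts itself and treats them as an HA cluster
Add-VMHostCluster -Name "hvcluster01" -VMMServer $vmm -Credential (Get-Credential)
```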

Summary of the kit we used:

Dell EqualLogic PS6000XV SAN
Dell PowerConnect 6248 gigabit switch stack
Dell R200 1U Rack Mount Servers
Dell M1000e Blade Centre

An awful lot of cabling.

Microsoft Windows Server 2008 R2 Enterprise (Both full and Core)
Microsoft Windows Server 2008 R2 Standard (Both full and Core)
Microsoft System Center Virtual Machine Manager 2008 R2
Microsoft System Center Operations Manager 2007 R2

And a ton of custom scripts.

Thanks go to Dave Veitch from Company Net for assisting in the configuration!

Code Regions - Their use and misuse

Over the last few evenings I have been reviewing code done by a fellow developer, and have come to a conclusion.

The .NET code regions (aka Collapse Regions, Collapsible blocks and dear god knows what else in other languages) can really be misused.

Correct use of regions can group common sections of code – for example, code that all interacts with the same property (you might have the property itself plus a number of private functions that work with or derive from it). They can improve readability (yes, we all know we should not have units / classes that size, but we all do, right?) and they are generally neat (it is cool seeing it all shrink down, isn't it?). BUT, likewise, you can seriously go overboard.
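As a hypothetical example of the "good" case – everything that exists purely to support one property grouped into a single region:

```csharp
public class Customer
{
    #region FullName handling

    private string _fullName;

    public string FullName
    {
        get { return _fullName; }
        set { _fullName = Normalise(value); }
    }

    // A private helper that only exists to support FullName
    private static string Normalise(string name)
    {
        return (name ?? string.Empty).Trim();
    }

    #endregion
}
```

Collapse that region and the class reads as a list of responsibilities; the trouble starts when regions like this get nested inside one another.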

The code in question has many levels of nested regions, and although it certainly does tidy the code up, it certainly does not help its maintainability.

I’m sure I’m not alone in using a lot of keyboard shortcuts when I’m developing, but with many, many regions, nested many layers deep, the keyboard shortcuts for region collapse / expand become pretty useless. No matter what you do, you always seem to end up with too much or too little code on screen – either way, it makes development difficult.

Which brings me onto a final point … if this is the case (i.e. the code is damn near impossible to understand at a glance because of the structuring in place), doesn’t this qualify as unmaintainable code? Perhaps we should be setting up rules on TFS and the like to detect the layers of nested regions, and flag the code for review when someone goes to excess?

Comments, as always, welcomed :)