.NET Exception Handling - The right and the wrong

Over the years I’ve seen both ends of the spectrum when it comes to handling unexpected errors in code. Having overzealous exception handling in an application is just as bad as having none (or worse, as it hides the problem!). Junior developers tend to go to the extreme, capturing and suppressing everything, as they’ve had it drummed into them that code must be “stable” and never “fail”. Well, I guess it depends on your definitions of stable and failure.  Personally, I would rather have an application that behaved, did what it was told and didn’t trash my data when something unexpected happened – I’ve seen lots of commercial apps, especially finance-related ones, that trash your data when they crash – not exactly fun when you then spend the next three weeks working out what’s missing since the last backup.

There is an excellent article that came out in 2005 on Exception Handling Best Practices in .NET (http://www.codeproject.com/KB/architecture/exceptionbestpractices.aspx), however, I have to admit, I don’t always follow them perfectly.

Personally, I try and follow these rules:

Never swallow an exception
If an exception has occurred, it's because something isn't right. At the very least, log the FULL details (so that includes the stack trace) centrally. You do have centralised logging, right?
This is actually probably one of the most contentious issues among developers – some say you should never catch exceptions merely to log them; instead you should let them bubble up to a higher layer to be handled. My feeling is that if the exception can be safely handled, without putting user data at risk, then it's safe to handle it – as long as it's logged!
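As a quick illustration (the console stands in here for whatever centralised logger you use), ex.ToString() gives you the type, message, stack trace and any inner exceptions in one call, whereas ex.Message alone throws most of that detail away:

```csharp
using System;

class LoggingDemo
{
    static void Main()
    {
        try
        {
            try
            {
                throw new InvalidOperationException("Database write failed");
            }
            catch (Exception inner)
            {
                // wrap with context and re-throw
                throw new ApplicationException("Could not save the order", inner);
            }
        }
        catch (Exception ex)
        {
            // ex.Message would only print "Could not save the order";
            // ex.ToString() includes the type, the stack trace AND the inner exception.
            Console.WriteLine(ex.ToString());
        }
    }
}
```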

Never re-throw an exception the wrong way
You’ve caught an exception and you want to pass it further back up the stack. Don't use throw new Exception(); – that discards the original exception's call stack, message and InnerException information – and don't use throw ex;, as that resets the call stack. Simply throw it again (i.e. throw; ).
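A minimal sketch of the difference – with throw ex; the recorded stack trace starts again at the re-throw line, so the frame that actually failed (Faulty here) vanishes from it:

```csharp
using System;
using System.Runtime.CompilerServices;

class RethrowDemo
{
    [MethodImpl(MethodImplOptions.NoInlining)]
    static void Faulty() { throw new InvalidOperationException("boom"); }

    static void RethrowWrong()
    {
        try { Faulty(); }
        catch (Exception ex) { throw ex; } // resets the call stack to THIS line
    }

    static void RethrowRight()
    {
        try { Faulty(); }
        catch { throw; }                   // preserves the original call stack
    }

    static void Main()
    {
        try { RethrowWrong(); }
        catch (Exception ex) { Console.WriteLine(ex.StackTrace.Contains("Faulty")); } // False

        try { RethrowRight(); }
        catch (Exception ex) { Console.WriteLine(ex.StackTrace.Contains("Faulty")); } // True
    }
}
```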

If an object implements IDisposable, use using
There is always a reason why an object implements IDisposable, and if it does, be sure to use the using keyword to ensure all resources are released when the object is done.
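For instance (a temp file keeps the sketch self-contained), using expands to a try/finally that calls Dispose() for you, even if the block throws:

```csharp
using System;
using System.IO;

class UsingDemo
{
    static void Main()
    {
        string path = Path.GetTempFileName();

        // Equivalent to try { ... } finally { writer.Dispose(); } -
        // the file handle is released even if WriteLine were to throw.
        using (var writer = new StreamWriter(path))
        {
            writer.WriteLine("Resource released automatically.");
        }

        // If Dispose() hadn't run, the buffered text might never have been flushed.
        Console.WriteLine(File.ReadAllText(path).Trim());
        File.Delete(path);
    }
}
```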

If you are expecting an exception, only catch that specific exception
Ok, I know that sounds strange, but bear with me! If you are handling conversion from GUI controls to, say, decimal values, then the odds are you are going to hit exceptions – and usually conversion-related ones. So instead of trapping Exception, you should be trapping specific exceptions such as ArgumentNullException, FormatException or OverflowException.
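A sketch along those lines (the fallback value of zero is just for illustration) – note that anything unexpected, say an OutOfMemoryException, still bubbles up, because only the three conversion-related exceptions are caught:

```csharp
using System;
using System.Globalization;

class ParseDemo
{
    // Catch only the exceptions decimal.Parse can actually throw for bad input.
    static decimal ParsePrice(string input)
    {
        try
        {
            return decimal.Parse(input, CultureInfo.InvariantCulture);
        }
        catch (ArgumentNullException) { return 0m; } // no value entered
        catch (FormatException)       { return 0m; } // not a number at all
        catch (OverflowException)     { return 0m; } // too big/small for decimal
    }

    static void Main()
    {
        Console.WriteLine(ParsePrice("12.50"));
        Console.WriteLine(ParsePrice("abc"));
        Console.WriteLine(ParsePrice(null));
    }
}
```

(In practice decimal.TryParse sidesteps the exceptions altogether – but when you do catch, catch narrowly.)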

If all else has failed, record the detail before you exit
Too many applications these days just throw up the generic “something knackered” .NET exception dialog and then exit. What do you do next? Start it up again. And what if it dies again? How are you supposed to report the problem? If you are building an application, make use of Application.ThreadException and AppDomain.UnhandledException (note that the latter is really AppDomain.CurrentDomain.UnhandledException, and you will need to hook it on each AppDomain, obviously) – record as much information as you can about the exception, and THEN terminate.
You can even go one step further and have the application automatically report the fault – I tend to do this for any programs that are being used internally in an organisation, and I know that they will always have access to reporting APIs (or SQL Servers).
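A console-app sketch of the AppDomain hook (the crash.log filename is just an example; a WinForms app would also wire up Application.ThreadException before calling Application.Run()):

```csharp
using System;
using System.IO;

class CrashReporterDemo
{
    static void Main()
    {
        // Hook every AppDomain you create, not just the default one.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            var ex = e.ExceptionObject as Exception;
            // Record as much as possible before the process dies:
            // timestamp, whether the runtime is terminating, and the
            // full exception (type, message, stack, inner exceptions).
            File.AppendAllText("crash.log",
                string.Format("{0:o} Unhandled (terminating: {1}): {2}\n",
                    DateTime.UtcNow, e.IsTerminating, ex));
        };

        Console.WriteLine("Crash handler registered.");
        // An uncaught exception on any thread now hits the handler
        // before the runtime tears the process down.
    }
}
```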

When you compare .NET to my early days developing in Delphi, there is a stark contrast in the behaviour you adopt with exceptions.  In Delphi, you used exceptions a lot to control your program flow (you threw exceptions pretty much when anything went even a little wrong). These days, things are a little more restrained – exceptions are increasingly reserved for cataclysmic events that will cause the application to no longer be reliable.

Ultimately, no matter what happens when an exception is triggered, it’s down to the developer to take a reasoned approach to what to do next. Is it possible to continue? Or more importantly, is it possible to continue with no risk to stability, user data or other systems? If in doubt, exit!

I’ve personally built a series of frameworks that allow me to handle exceptions, logging and reporting very effectively, however, many developers are not in a position to do that (especially if you are working in a very small team, or are perhaps an independent). Or maybe you’d just rather use a product to help you. If that's the case, I can heartily recommend checking out Exceptioneer.

Cisco - Introducing Cisco Configuration Professional (aka SDM 2.0)

Carrying on from my earlier posting about SDM (Security Device Manager) I’d like to introduce you to Cisco Configuration Professional – also available from Cisco CCO.

In a nutshell, Cisco Configuration Pro is basically SDM 2.0. A lot of the screens incorporated within it are plainly the old SDM screens, although it does fix a number of the “new” issues that you encounter with Windows 7 and Vista. And at least it’s supported these days, unlike SDM.

When you first fire it up you are greeted with a nice, new, clean feeling UI. This quickly passes when you see the actual configuration screens!



My biggest gripe is that when you fire it up, and provide the details for the router(s), you still have to mess around and hit “Discover” in order to get it to actually interrogate the devices – but at least this now occurs in the background.

Some things of note, though: CCP can actually receive events from IPS modules so you can get alerts – which is cute – as are the extended port / protocol monitoring screens.

It also runs a local web server if you install the software onto a PC, to host and operate the (still Java) app.

At least it works on Windows 7, though. Sigh. Can’t wait for a proper app.

Cisco SDM - Installing onto the router

If you are new to Cisco routers, and especially the SOHO range, Security Device Manager (or SDM) can be an absolute godsend – especially if you are only used to working with routers via their web interface.

If you pick up a SOHO Cisco router you can find out if SDM is installed by simply pointing your web browser at its IP address. But be sure to check both http and https, as it can easily be configured to respond only on https (which is obviously the more ideal situation).

However, if SDM is NOT installed, and you want to install it, here are the steps to carry this out – note that you need the SDM installer first, which is available from Cisco CCO. At the present time the latest version is 2.5, which is actually rather ancient but still does the job.

So, to work.

Start off by unpacking the zip file to a decent folder on your computer – somewhere temporary is fine.
Open up the folder and run setup.exe
You might get a warning about not having JRE installed – you will need this, so if you haven’t got it installed follow the prompts and install it.
Follow the wizard through until you are asked where to install SDM. You have three options: This Computer, Cisco Router, and Both.
If you install on “This Computer” or “Both”, the installer will unpack SDM onto the computer itself; if you select “Both” or “Cisco Router”, it will be installed onto the router – so you can access it from anywhere (that has a JRE installed) by pointing your web browser at the router. This can save you some hassle later, but needs space on the router – if you have the memory upgrades installed, this shouldn’t be a problem however. For this example, I’m installing on the “Cisco Router”.


The next prompt you will have is for the IP address and login details of your Cisco. Note that the user you supply will need to be privilege level 15 in order to carry out this install.
You will be prompted to select the suitable modules to install – I would select Typical, as this will get the installer to only install what your router is capable of (based on your IOS install).
… and after a while, the install is done!
To confirm simply fire up your browser and point it at your router …
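If the router doesn’t yet have a privilege 15 user, or its web server isn’t enabled, something along these lines from global configuration mode sets up what the installer needs (exact commands vary by IOS version, and the username and secret here are placeholders):

```
! user account for the SDM installer (and for SDM itself later)
username admin privilege 15 secret S3cretHere
! enable the embedded web server - https being the more ideal option
ip http server
ip http secure-server
ip http authentication local
```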


Now, once you have gotten used to your router, it’s time to start exploring the command line – for that, you want putty!


I’ve had a lot of problems with SDM on some newer machines – because of this I keep an XP virtual machine handy: plain old XP, running IE 6 and JRE 1.4.2_19. These work, and don’t seem to cause any problems …
You can get the old version of JRE here.

Cloud Platforms and Microsoft Licensing

Over the past few weeks I’ve been conducting licence reviews on behalf of a client. And one part of the review revolved around Cloud platforms.

This led me down an interesting, and truly unexpected, path of discovery – and uncovered so many mistruths, unknowns and some seriously concerning staff training issues…

Now, I’m no Microsoft Licensing expert, but I can safely say that I know more about it than some other people in the industry, and I know my way around the various Microsoft schemes.

The problems all develop when you start looking at Cloud platforms – specifically Amazon EC2 and Microsoft Azure – when you are looking to run Windows nodes (either a Windows AMI in EC2, or a Windows VM Role in Azure).  Neither of these options supports running Microsoft Windows Server Web edition, so in essence, you have to tread carefully with regard to licensing, as there are requirements with the editions you do get.

Both Amazon EC2 and Microsoft Azure bundle the licence cost of Windows Server into the instance-hour cost. So that’s sorted, then. Or is it?

As I’m sure any readers in the industry will be aware, there are Client Access Licence (CAL) requirements for operating a Windows Server on your own hardware – both for users making use of services on the box (even if they are consuming them remotely, via a website) and, per a rather nicely worded paragraph in Microsoft’s EULA, for users who are not anonymous when making use of any websites or web services hosted on said server.

The curious thing is, there is no clarification on how this authentication or identification has to be carried out. The immediate assumption would be that users need to be authenticated by Active Directory accounts. But what about authenticating against a customer SQL table? Or an XML file? Surely in these cases a user is no longer anonymous, and therefore you are required to have a CAL per user?  Obviously this would have a massive impact on services hosted within clouds – as most of them have authentication, and as such know who you are. Yet they are not purchasing additional CALs (which, incidentally, would make any hosted service prohibitively expensive – this is, after all, why Microsoft brought out Windows Server Web edition).

I pitched this question to Amazon’s support guys, who referred me to their sales team who, after four attempts to elicit a response, finally got back to me – with one of the most confused and noncommittal responses I’ve ever seen.

Now, ultimately, this could be a really big deal to services that are using Cloud platforms, and assuming they are fully licence compliant. In order to try and get clarification (and ultimately, wrap this up for my client!), I’ve ended up contacting Microsoft to give me the final verdict. But I’ve yet to hear.

I hope things are not as dire as they seem to be …

Social Privacy - So who is connect.me?

Over the last 24 hours I’ve been watching the growing debate around a new website that has appeared called connect.me.  It has very little information on it, and had even less on it yesterday (no privacy policy).

Sophos’ security blog, Naked Security, picked up on it here – and highlighted the madness of some people who are registering with a service that does not say what it actually does. Especially when you need to hand over the keys to your LinkedIn, Twitter or Facebook account to them to get in. Surely these people have seen what can happen when any of these accounts get compromised?

I’m pleased that Sophos actually had a response from the people behind connect.me, but it still doesn’t exactly fill me with confidence. This feels decidedly dodgy to me. Anything that doesn’t explain exactly what they are offering before I have to register, or give me a way to register with some other details is a big no-no for me.

What really shocks me a little is actually the people that are registering for this service. I have seen a number of exceptionally technical people fire off the automated tweet saying they have “reserved their username” on connect.me. And some of these people should really know better than to trust an unknown entity with their identity. (Hey, that rhymes!)

Needless to say, I’ll be keeping well clear until their intentions are well known.

UPDATE: I’ve just come across this article on Mashable. To me, it feels like they are trying to justify the approach connect.me has taken by arguing that they are in “Startup Stealth Mode”. Well, if that is the case, why would they post it on Facebook in the first place, and why on earth would you have a viral hook in there to hit Twitter etc when people signed up? Does not seem terribly stealthy to me. I have to say, I’m still not comfortable with their approach – it’s one thing to collect email addresses, it’s another to collect social media details.

The Internet, Social Networks, and Privacy

I’ve said it before, and I’m saying it again. Nothing on the internet is temporary, and nothing is private.

And yet people really do seem to expect things that they say on social networks, such as Facebook, to remain private. There has been article after article about people complaining that they have been skipped over for promotions, not been offered a new job or even lost their jobs due to things they have said online. And yet they wonder why?

Facebook is a great example. Back in the day, when they obviously had not thought about any privacy concerns, everything was open. Things improved after complaints, and the media spotlight was brought to bear, and information became restricted to friends of friends only. Now you can actively control where your information goes (well, more or less) – and there are warnings when you add applications that it will have access to your information. And yet people still add them, and even the rogue data collection spam ones too. Why? Is it the social network’s fault? Or is it user education? Or is it both? Or even, something else entirely?

I get tired of trying to tell people that what they post on the internet will not disappear (for example, my very first company died out a long time ago … and yet the Wayback Machine can STILL dig up the website cache for you to view!). You delete things on Facebook, and you think they are gone. Then just go and try their “download user data” option and check it out. Nope, your information is still there. All those messages that you thought were deleted … are there. Hope you didn’t say anything incriminating!

I’m not advocating a police state style managed internet, but I am advocating user education – and sensible web app construction. Privacy and security should NOT be an afterthought, but should be deeply ingrained in your design and architecture.

And people really do need to stand up and take the aftermath of the things they say. Free speech is still alive, yes; however, remember to put your brain into gear before your mouth. There are too many people out there who can probably read your social network page, blog or newsgroup posting for you to just rant off about something – especially if it’s something about a company or person that you would not want to say to their face!

And, please people, stop adding those damn rogue apps on Facebook. They drive me nuts.

If only users would actually read some of the articles on security blogs, such as Naked Security by Sophos, we might have a slightly safer digital world. But then again, that would assume that people actually understand their personal digital security …

There has to be a better solution to this. Maybe Apple have the right idea with their App Store after all – trying to prevent rogue app introduction by vetting every submission …

The road to IPv6..

What with all the news about IPv4 addresses running out on the public internet, it got me thinking about my internal setup.

Ok, first off, I really don’t need to run IPv6 internally at the office or at home – we don’t have that many devices lurking on our networks, but it would be an interesting test case for our network linked products.

However, we have been thwarted.

Our Cisco router will happily work on IPv6 after having the latest IOS image loaded.
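For anyone curious, enabling it was along these lines (the interface name is just an example, and 2001:db8::/32 is the reserved documentation prefix – substitute your own addressing):

```
! turn on IPv6 routing globally (needs an IOS image with IPv6 support)
ipv6 unicast-routing
!
interface FastEthernet0/0
 ipv6 address 2001:db8:0:1::1/64
 ipv6 enable
```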

But our network printers will not. Bugger.

Ironically, both printers are HP and one isn’t even that old.

Come on HP. Wake up and smell IPv6.

Clicky Dashboard; a desktop window on your Clicky Web Analytics

On the first of February my company released a small, very simple, companion application for use alongside Clicky Web Analytics.  The app in question is called Clicky Dashboard (yeah, I know, it’s original).

What does it do?

It simply pulls information down hourly and presents the pertinent parts of your Web Analytics on a desktop client – letting you keep working while still keeping an eye on your website performance.

At the moment, the application is extremely basic as it was never really intended to be released – it was built for internal use; however, we thought that there might be a need for the application by others.  Instead of pushing forward developing it in various directions that might not actually make sense, we thought that we would open it up “as is” and get YOUR feedback. What would YOU want it to do? What statistics do YOU want it to display?

So why not download it, have a play (with the fully functional 30 day trial version) and give us your feedback?

For details on Clicky Dashboard, please click here

The ZZR Rebuild project!

I noticed this morning that the last posting I had on here was back when I first got my ZZR; that is a while ago now! Since then I even bought a Honda Hornet (good fun), and sold the Hornet. I do, however, still have the ZZR, but she has been laid up due to problems with the valve clearances.

Turns out that doing some shim work on the ZZR showed up a problem of extreme coke build-up, which was preventing the valves from closing properly – and therefore, no compression. Ho hum. Anyway, the cylinder head’s been sorted, and things are slowly getting there – trying to fund it all while starting a business is proving awkward, but thanks to a good friend of mine doing the work it’s a lot easier than it would have been! Kicker at the minute is that while pulling it all apart, we found that the cam chain has stretched and needs replacing. That and a full gasket kit, and the engine should be ready to put back together.

Oh, she was also resprayed to black too!

Here’s some pics to keep you amused for now.


TFS 2010: Using a 'hard lock' approach to source control

Just in case anyone is a TFS 2010 user, and they don’t like the multiple checkout support that it has (i.e. more than one person can edit the same file, and they just have to resolve conflicts on check-in), you can change it to use the old-school approach of full locking on checkout – this means that only one person can edit, or even hold a lock on, a file at a time. If anyone else wants to edit that file, they are blocked until you check in your changes.

Why would you do this? Well, pretty much the only reason is if you have developers on your team who don’t understand “current” source control and you keep losing changes when people check in.

So, how do you change this option? It’s easy enough, but you will need permissions on the Team Project to change it – if you are unsure, speak to your TFS Administrator.
To get to the option itself, first go into Visual Studio, to the Team Explorer and then right click on the project you want to change the option for. Then select Team Project Settings, then select Source Control.
In the dialog that appears, untick “Enable multiple check-out” and hit OK.

Job done.