Log4Net and Splunk

Splunk is one of the most impressive "On Premises" log aggregation tools that I have ever come across. Being able to bring a large number of disparate data sources together into one combined index is truly useful in a modern Ops environment.

One of the things I find helpful from a development perspective is consistent logging - and too often this is something that development teams overlook until things break.

Fortunately, getting data from a .NET / C# application into Splunk is not difficult, so these days I try to log absolutely everything (well, come on, the free tier gives you a decent chunk of an allowance too!).

The first thing I do is to create a new Index in Splunk - you do this by selecting Settings, Indexes and then clicking New.
The only box you need to fill in is the index name - let everything else default on your installation.

Once you have the index created, we need to set up the input. Settings, then Data Inputs, will take you to the right screen. Click Add New next to UDP. Pop in an unused port, say 8081, then click Next. Make sure you select the index you created earlier, and specify the type as Generic Single Line - this basically tells Splunk it's unformatted data and not to pre-parse it.
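Before wiring up the application, it's worth checking the UDP input is actually listening. A quick way is to throw a test datagram at it by hand - here's a minimal Python sketch (the `splunk-server` hostname and port 8081 are just the values from the walkthrough above; substitute your own):

```python
import socket

def send_test_event(host: str, port: int, message: str) -> None:
    """Send a single log line to a Splunk UDP input (fire-and-forget)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # UDP is connectionless: sendto() hands the datagram to the OS and
        # returns immediately - there is no delivery confirmation.
        sock.sendto(message.encode("utf-8"), (host, port))
    finally:
        sock.close()

# e.g. send_test_event("splunk-server", 8081, "INFO - test event from my desk")
```

If the event shows up when you search your new index, the input side is done.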

The next thing you need to do is actually get your code to submit data to Splunk -- the easiest way that I have found is to use Log4Net. In Visual Studio, install the log4net NuGet package and this will take care of creating the relevant config entries. If, like me, you prefer to put your logging code into a common assembly and then reference it elsewhere, remember to copy the assembly redirects and log4net-specific entries into your other configs (or things just don't work!).

In your code, you will probably have a common class for sending log data - something like:

using log4net;

namespace YourApp.Common
{
    public static class Logging
    {
        /// <summary>
        /// Application or Class that should be identified with the log statement that is passed
        /// </summary>
        public static string Application { get; set; }

        /// <summary>
        /// Initialise logging - must be called at application start
        /// </summary>
        public static void Initialise()
        {
            log4net.Config.XmlConfigurator.Configure();
        }

        /// <summary>
        /// Log an information message
        /// </summary>
        /// <param name="message">The message to log</param>
        public static void Info(string message)
        {
            ILog logger = LogManager.GetLogger(Application);
            logger.Info(message);
        }
    }
}

That way you can specify the application name to be passed through with the logging data (handy for Splunk, as you can throw everything into one index and then break out exactly what you need later) - and use the class from pretty much anywhere.

Your web.config then needs to look something like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
  <log4net>
    <appender name="UdpAppender" type="log4net.Appender.UdpAppender">
      <param name="RemoteAddress" value="splunk-server" />
      <param name="RemotePort" value="8081" />
      <layout type="log4net.Layout.PatternLayout" value="%level - %date{MM/dd HH:mm:ss} - %c - %stacktrace{2} - %message" />
    </appender>
    <root>
      <level value="ALL" />
      <appender-ref ref="UdpAppender" />
    </root>
  </log4net>
</configuration>

Finally, call away to get your data logged - something like:

Logging.Application = "MyApp";
Logging.Initialise();
Logging.Info("Hello from my application!");

And that, folks, is it - you can now push .NET C# app log data into Splunk.

A couple of points that some people might question me on:

Why use UDP Appender and not TCP?

UDP is a lossy transmission protocol, and it is entirely possible that log messages do not make it into the Splunk indexer; however, it is significantly lighter weight than establishing TCP/IP connections.

Can I log to multiple locations - such as Splunk but also a text file?

Yes - add another Log Appender; the Log4Net docs are pretty good on this one. 

Is there much point in having the date / time in the log message?

That depends - if you are worried that messages might get cached somewhere, and you don't always trust the date / time that Splunk adds to its indexed entries, then you probably want to keep it. Otherwise, feel free to drop it from the pattern.

EE / Apple Wifi Calling

I've moved house, and the EE signal sucks. "No problem", I thought, "EE had enabled Wifi Calling a few days earlier - I'll give it a shot".

It generally works OK - but only on my wife's iPhone, and not mine. It seems that EE have only enabled it on personal contracts and not corporate contracts. They have, however, pushed out the carrier profile update, so you see the option - although it does absolutely nothing but tease!

The one gripe that I have, other than not being able to use it, is that whenever the phone sees a tiny bit of network signal it tries to switch from Wifi - which means you drop the call. This happens way more than I'd generally put up with, and the only way round it that I've found is to enable Airplane mode and then re-enable Wifi. Not the best user experience, but I guess this one is down to Apple!

Here's hoping that EE and Apple can resolve the glitches.

Deploying Cloud Foundry with vSphere - Part 2

Now, I decided to try a "light" installation of Cloud Foundry, as this isn't going to be production ... normally, it seems, you deploy BOSH and then deploy Cloud Foundry from that, but the light route uses bosh micro.

Create a folder to hold your installations, then run

brew tap xoebus/homebrew-cloudfoundry

brew install spiff

git clone https://github.com/cloudfoundry/cf-release

git clone https://github.com/cloudfoundry/bosh-lite


Things will then kick off -- downloading the basic stemcell and pushing things onto the bosh micro director to deploy.

Deploying Cloud Foundry with VMware vSphere - Part 1

Cloud Foundry is an interesting portal / management system from EMC's Pivotal Labs team that allows you to (essentially) manage Docker and the associated infrastructure it provides.

But one of the features I love the most is that it will directly integrate with VMware vSphere and take so much of the pain away.

However, the initial setup can be a bit ... awkward, so I thought I'd document how I got it working.

First off, I assume you have a fully working vSphere setup - ESXi hosts configured and running, along with the vSphere Appliance or an installation on a Windows box somewhere. I tested this with vSphere 6.

Next, give the instructions a read: http://docs.cloudfoundry.org/deploying/vsphere/
Now, my private setup for testing was nowhere near the minimum specifications (I run a couple of HP MicroServers at home for testing things), but this didn't stop me continuing!

One thing to bear in mind: if you have multiple ESXi servers, make sure you have a datastore mounted on them all that can be shared ...

The first main step to deploying Cloud Foundry is deploying the MicroBOSH image onto vSphere.
Pretty straightforward, but some jumping around is needed. Create a folder on your machine to hold things, and create the manifest.yml as per the documentation. Go through and replace the various placeholders (such as IP addresses - anything in caps) to make it valid -- note that you need to use a vSphere cluster and shared storage here -- I cheated, and used a cluster with a single node and an NFS datastore that I'd created earlier.

After that, grab MicroBOSH off the web. Installing this onto my Mac literally took opening a terminal and running the command, thanks to Yosemite already coming with a Ruby install - but I did need to run it under sudo!

With BOSH on your machine, the next thing is a stemcell. This is basically the "starting point" image for Cloud Foundry, it seems.

So ...

Downloaded the stemcell with: 

bosh download public stemcell bosh-stemcell-2969-vsphere-esxi-ubuntu-trusty-go_agent.tgz

Prepped the deployment with: 

bosh micro deployment manifest.yml

When I went to deploy using: 

bosh micro deploy bosh-stemcell-2969-vsphere-esxi-ubuntu-trusty-go_agent.tgz 

I got an error: 

either of 'genisoimage' or 'mkisofs' commands must be present

Fixable using: 

brew install cdrtools 

(you need Homebrew installed - Google it).

I also encountered an issue where the stemcell I'd picked wasn't actually provisioning the network ... initially I thought this was a bug in the process, as no errors were given, but I then discovered (after a lot of googling) that I had an error in my manifest.yml file. It seems the file is really sensitive and there's pretty much no validation.
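Given how sensitive the manifest is and how little validation there is, a crude pre-flight check can save a redeploy cycle. This Python sketch is entirely my own invention (not part of the BOSH tooling) and only catches the obvious slips - stray tabs, which YAML rejects, and leftover ALL-CAPS placeholders from the template:

```python
import re

def preflight_check(manifest_text: str) -> list[str]:
    """Crude sanity checks for a hand-edited manifest.yml."""
    problems = []
    for lineno, line in enumerate(manifest_text.splitlines(), start=1):
        # YAML indentation must be spaces - a stray tab breaks parsing.
        if "\t" in line:
            problems.append(f"line {lineno}: tab character found")
        # Flag leftover ALL-CAPS placeholders (ignoring comments).
        if re.search(r"\b[A-Z]{2,}(?:_[A-Z]+)*\b", line.split("#")[0]):
            problems.append(f"line {lineno}: possible unreplaced placeholder")
    return problems
```

It's no substitute for a real YAML parser, but it would have caught my mistake a lot faster than redeploying did.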

Full console output of provisioning micro bosh:

iMac:micro-deployment andy$ bosh micro deploy bosh-stemcell-2969-vsphere-esxi-ubuntu-trusty-go_agent.tgz 
No `bosh-deployments.yml` file found in current directory.
Conventionally, `bosh-deployments.yml` should be saved in /Users/andy.
Is /Users/andy/micro-deployment a directory where you can save state? (type 'yes' to continue): yes
Deploying new micro BOSH instance `manifest.yml' to `' (type 'yes' to continue): yes
Verifying stemcell...
File exists and readable                                     OK
Verifying tarball...
Read tarball                                                 OK
Manifest exists                                              OK
Stemcell image file                                          OK
Stemcell properties                                          OK
Stemcell info
Name:    bosh-vsphere-esxi-ubuntu-trusty-go_agent
Version: 2969
  Started deploy micro bosh
  Started deploy micro bosh > Unpacking stemcell. Done (00:00:07)
  Started deploy micro bosh > Uploading stemcellat depth 0 - 20: unable to get local issuer certificate
at depth 1 - 19: self signed certificate in certificate chain
. Done (00:22:07)
  Started deploy micro bosh > Creating VM from sc-54cdca8c-6b13-4c3d-ae48-5bc57d9b93ffat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:21:46)
  Started deploy micro bosh > Waiting for the agentat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:02:23)
  Started deploy micro bosh > Updating persistent disk
  Started deploy micro bosh > Create disk. Done (00:00:02)at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
  Started deploy micro bosh > Mount diskat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:01:35)
     Done deploy micro bosh > Updating persistent disk (00:01:50)
  Started deploy micro bosh > Stopping agent services. Done (00:00:01)
  Started deploy micro bosh > Applying micro BOSH specat depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
at depth 0 - 20: unable to get local issuer certificate
. Done (00:06:16)
  Started deploy micro bosh > Starting agent services. Done (00:00:00)
  Started deploy micro bosh > Waiting for the directorat depth 0 - 20: unable to get local issuer certificate
. Done (00:01:14)
     Done deploy micro bosh (00:55:44)
Deployed `manifest.yml' to `', took 00:55:44 to complete
at depth 0 - 20: unable to get local issuer certificate

After that, you can check the deployment is valid by running: 

bosh target

You should be prompted to log in; in my case (because my yml didn't define anything) the default credentials of admin / admin were valid.

Next step, deploying BOSH.

Umbraco 7 on Azure Website

This weekend I decided to finally get around to moving two static websites from being a Website MVC Project in Visual Studio to something that the wife could look after - so I set about rebuilding them in Umbraco.

Considering it's been a while since I used Umbraco, I was tempted to download it and create a new project, throwing it in via NuGet. But then I noticed it was in the Azure Website Gallery. A few clicks later, a website is working (New, Website, From Gallery, Umbraco), and even running on a free Web SQL Database (20 MB limit) -- however, I do wonder how this is going to pan out, as the Web tier is due to be retired this September.

Once there was a basic instance running, the next obvious thing was to start building out the templates and document types - and creating the sites (I run both sites off a single instance of Umbraco, and just use the hostname mapping feature it has).
The problem appeared when it came to FTPing to the instance - I keep forgetting that the nodes (in this case running in a cluster) take a while to synchronise - so if you edit, you need to wait ... Or, as I found out, it's easier to enable Monaco (the Visual Studio web editor that works with Azure) - that way you can do the edits, hit save, and it's instant.

Finally, a quick upgrade by grabbing 7.2.4 from the Umbraco website and replacing bin, Umbraco, Umbraco_Client and the Views\Partials\Grid folder and I was done.

My only gripe? The Azure installer doesn't give any indication of, nor method for, changing the configuration that's held in the Config\umbracoSettings.config file - some of which (like the mail server) users probably want to alter easily without messing with FTP.

BlogEngine.NET and MySQL Site Map Provider (.NET Connector)

I just encountered an error that had me stumped for a short while - I installed the .NET MySQL Connector onto one of my servers and suddenly my installation of BlogEngine.NET ceased to work - I was unable to log in to the admin, and just encountered the yellow screen of death.

After a little digging, I identified that the MySQL Connector had modified the machine.config and added its Site Map Provider into the list. As this wasn't configured, it was throwing an exception in the BlogEngine.NET code...

The fix? Simply adding <clear /> to the siteMap Providers list within the system.web block in the root web.config and the site sprang back into life!
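If it helps, the relevant piece of the root web.config ends up looking something like this (a minimal sketch - the important part is the <clear /> at the top of the providers list):

<system.web>
  <siteMap>
    <providers>
      <!-- Drop any providers inherited from machine.config,
           including the entry the MySQL Connector registered -->
      <clear />
    </providers>
  </siteMap>
</system.web>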

Octopus Deploy and Proget

This weekend I switched my local Octopus Deploy server to use ProGet as the package repository.

Generally speaking, it was a pretty painless switch - but I was getting errors until I added an advanced MSBuild argument (/p:OctoPackPublishApiKey=BuildKey) to provide an API key; obviously you need to configure this key in ProGet :)
Initially I didn't think I would need to provide the API key, as the build agent was running under a user account that had full access to the feeds; it seems, however, that this is not the case when you are using normal authentication (i.e. not domain joined).

TFS 2013 Update 2 - Problems with _oi Pages (TF400898)

I've been testing TFS 2013 Update 2 over the last few days, and have encountered a couple of issues.
The first appears to be a problem specific to one Team Project Collection, where the Activity Log (accessed via _oi) does not render correctly (I'm progressing this with MS Support); the second one, however, is slightly more interesting.

If you go into the /_oi interface, select Job Monitoring and pick a job, you get to the detail page. Only now you also get something unfriendly.

Confirmed this on a clean install of Server 2012 R2, with TFS 2013 Update 2, so it seems that this is a breaking "change".
Hopefully a hot fix comes out soon for this one (and maybe the Activity Log issue).

Update: Just as I'm posting this, I got a call back from MS Support. Both issues are confirmed as bugs, and will be fixed in a future release. Issues that you might encounter with the Activity Log should automatically clear up after a month or two as old entries are purged - so if you encounter a TF400898 here, you will probably have to put up with it for a while! 

Trouble with Apple's System Preferences for Printing?

Having trouble with your printer under OS X? Has it changed IP or DNS name, and are you now faced with having to delete it and re-create it?

Don’t worry, you can skip the rather limited Apple GUI, and simply visit http://localhost:631 on your Mac.

If this is your first time, you will encounter an error saying you need to enable the web interface; to do this, open Terminal and type in: 

sudo cupsctl WebInterface=yes

You’ll be prompted for your password.

Then click Administration, select Manage Printers, then select the Printer. Then make the changes you need; you'll be prompted for your username and password again.

Now, in my case the problem was AirPrint, so I simply changed the connection to IPP over HTTP, and it's fixed!

Poor password policies put users at risk

It's not exactly new, and something that almost every Web Developer knows ... or at least should do.

Poor password policies put users' data at risk; and the larger (or more high-profile) the product, the more of a target you become - especially if you are potentially storing anything that is profitable for a crook.

So why do so many large, high-profile websites have poor password policies? It's not technical, that's for sure. It can only be laziness, poor standards, or pressure on the development teams behind them.

After the recent breach that impacted over 2,000 Tesco customers - a breach that Tesco are still saying was on OTHER websites, not theirs, but one that ended up with my own account being locked (even though it was not on the breached data list published on Pastebin), a point on which I still have not had a clear, satisfactory response from Tesco - I decided I'd work through a few of the websites I use and see what their password policies were like.

Many were pretty good - allowing you to use long, complex passwords. However, there were some interesting "issues" that I found:

British Gas

Required an alphanumeric password (no symbols), with a length of between 8 and 20.
Oh wait, no it's not - try to use a 20-character password and it fails, saying it needs to be 16 max. Poor design / UAT work here.


Required an alphanumeric password (no symbols), with a length of 8 to 12.
Rather worryingly, the passwords are case insensitive.


Required an alphanumeric password (no symbols) with a length of 6 to 20.


Required a password of between 7 and 16 characters.
Limited symbol set accepted.


Password length of 6 to 10 characters (what on earth?!)
Reported as being case insensitive, but I didn't test this here.
Alpha numeric only - no symbols.


As you can see, Tesco is by far the worst offender that I've encountered on my short wander around the internet - but I'm absolutely amazed by HMRC's policy; considering what they secure, that is awful.

There is no excuse for poor password management or policies; I just wish it didn't take people's information being leaked into the public domain before companies start to pay attention.
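For what it's worth, the rules I'd want to see are trivial to express in code. Here's a minimal sketch of a sensible server-side check (the exact limits are my own preference, not any standard) - note what it deliberately doesn't do: no character whitelist, no case-folding, no silly maximum:

```python
def check_password_policy(password: str) -> list[str]:
    """Return a list of policy violations (an empty list means acceptable)."""
    problems = []
    # Length does far more for strength than forced symbol rules do.
    if len(password) < 12:
        problems.append("too short - use at least 12 characters")
    # Cap only to protect the hashing backend, not at 10 or 16 characters.
    if len(password) > 128:
        problems.append("too long - 128 characters is plenty")
    # Deliberately absent: no restriction on which characters are allowed,
    # so symbols, spaces, and mixed case all count.
    return problems
```

Every site in the list above fails at least one of these three lines.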


If I get time, I'll have a look at some other sites - in the meantime, I'd strongly recommend that people practise good password management and use a different password on each website (there are tools such as KeePass and LastPass to help you keep note of them - securely - so you don't really have an excuse).

I'd also recommend plugging your email address into https://haveibeenpwned.com/ - a great service by Troy Hunt.