BT to block Newzbin 2

It’s a fairly significant win for the anti-piracy groups … it’s certainly the first time that I’ve heard of an ISP being ordered, by a court, to restrict its users’ activities.

I’m talking about the news today that BT has been ordered to block its users from accessing the Newzbin 2 pirate search website.

I don’t advocate piracy, but I do think there is some sense in what people say – if software, music and films were available in digital formats and were cheaper, then surely the amount of piracy would be lower?

I still tend to buy music on CDs. Why? Because I have many different digital devices that I want to play it on. So I buy the CD and rip it to MP3. Solved. For ages it was impossible to buy any digital music without some form of DRM on it – and to date, no one has managed to come up with a GOOD DRM system that works fully across platforms – from mobiles, to PCs, to Macs, to in-car players.

I think the same problem now faces films. There is no single, easy-to-use, well-supported platform for playing films that you can download. Legally.

Perhaps if these problems were solved, piracy would reduce. Perhaps.

As it stands, I can predict a rise in people using things like Tor and private VPNs.

Telerik JustCode

For a while now I have been using the Telerik JustCode Visual Studio extension, which provides extended code analysis and refactoring capabilities – areas where I find the standard environment falls significantly short.

I actually started off using ReSharper to extend Visual Studio, and had been using it for many years before JustCode was around; however, I found myself switching for a number of reasons:
- Poor performance
- JavaScript refactoring
- Extensions to Unit Tests (new unit test runner)

It also helps that the Telerik team are extremely receptive to wish-list feedback and, in fact, seem to release builds just when I’m wanting new features.

So, what exactly is JustCode?

Well, simply put, it’s a plugin for Visual Studio 2005, 2008 and 2010.
What does it provide? Now that’s a fairly long list:
- Code Analysis
- Refactoring
- Unit Test extensions
- Code Templates
- Code Navigation and Search
- Code Assistance
- Code Generation
- Code Quick Fixes
- Code Cleaning and Formatting
- Code Quick Hints

I could go on forever and give you a full rundown of the functionality in detail, but I’m not going to. Instead, if you are interested, why not read the Telerik JustCode page? It covers everything.

What are the benefits? Ultimately, I code quicker – and more efficiently. It takes care of common chores such as locating missing classes in assemblies that you do not reference (yet), removing duplicate or unnecessary characters, automatically adding casts and more.
The much improved error display is also a godsend – and seems to be more accurate than the one in Visual Studio too (it’s actually part of the Code Analysis side of JustCode). It has support for C#, JavaScript, ASPX and XAML – basically all the languages that a modern .NET developer has to be conversant in!

But my absolute favourite has to be the extensive refactoring tooling: moving types to new class files, extracting methods to improve code reuse, and automatically adding relevant stubs in descendants. These are all little operations that I know we can do manually, but why should we? Surely tooling such as this can only make us faster at producing quality code?

As it stands, JustCode is only one of a couple of tools that I always install in my Visual Studio environment to allow me to shape it to my needs – but more on the others some other time!

For now, your homework is to go and check out JustCode, and see if it will help you code faster …

What's required for an efficient development team?

Running an efficient development team is not an easy task – but many people, especially managers, seem to think it’s just the same as running any other group of people.

And so it is – but whoever is responsible for the dev team needs to be technical. They’ll be the one with the responsibility (and authority) to make sweeping architectural changes on active projects when problems are encountered. If companies put non-technical people in these roles, you rapidly end up in a cycle of meeting after meeting discussing things, with nothing ever being done.

But there is more to it than a good manager; many people in the industry will have heard of the Joel Test – a short, simple test that Joel Spolsky came up with back in 2000, but one that is still very relevant today. It highlights the key areas that will significantly impact your dev team, and ultimately potentially change the outcome of any project (poorly run dev teams burn more time “freewheeling” rather than working). Why not have a read of the FogCreek About page (that’s Joel’s company, in case you were wondering) – now that’s a company a developer would be happy working for!

I’ve been in and out of enough software companies while consulting to pick up on common issues, and it truly is startling how often the really basic points get missed – time and time again. So often, in fact, that I’ve ended up with my own checklist of things to watch out for:

Planning, Scheduling and Change Management

Having a good understanding of what you are trying to do is … well … important. And yet so many (significant) development projects start with no real regard for what the aim actually is.
At a minimum you should understand:
* The problem you are solving
* The target audience
* How they will use your package (Use cases!)
These in turn should lead to a functional and testing specification – without these, you can’t plan …
Then you should actually plan your releases; agile is good here, as it forces you into short, manageable release cycles – and this in turn forces you to keep to small working units. As a rule of thumb, break your features down into tasks, right down to the hours-to-implement level (this forces you to do the technical design work, you see!), and record the information in your planning software. Be sure to allow time for planning, development, bugs etc.
Once your development cycle is underway, you shouldn’t change anything within it – any time you change what is planned in the active cycle, you risk completely blowing it out of the water.
Every day should start with a quick (five to ten minutes maximum) development meeting involving ALL your development staff – at this point daily requirements are outlined (based on the current release plan – remember, no changes!) and potential problems are raised; if anything needs to be discussed in detail, handle it separately. This format gives everyone a good understanding of the big picture; without it, people can feel devalued and end up questioning what it is they are working towards.
Before the start of a development cycle, a planning meeting (or more than one!) is needed; normally this starts off with the project stakeholders and technical lead only, with the developers brought in once the time-estimating stage is reached – it should always be the developers themselves who produce the actual time estimates.
At the end of the development phase, a comprehensive testing and sign-off stage should begin, in which the requirements are checked and the project stakeholder is notified. At this point, release management takes over.

Source Control

Source Control really is a simple step. And I don’t mean a simple source control system; I mean something that can handle branch and merge operations – properly. No matter which way you run your dev team, you WILL end up with multiple live copies of one project (release and development, anyone?) – yet far too many places work from “copies” of the code and manually sync changes. Far too painful, far too inefficient and far too error-prone!
Then we get onto the problem of exclusive locking. If you run a very basic source control system with exclusive (single) locking in place, you will almost certainly hit problems – especially if your developers have a tendency not to check files in regularly (but more on that later).
Under no circumstances should you EVER have a project where the code lives only on a developer’s machine and is not source controlled.
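To make the branch-and-merge point concrete, here is a rough sketch using the tf.exe command line that ships with TFS (the server paths and comment are purely illustrative, not from any real project):

tf branch $/MyProject/Main $/MyProject/Releases/1.0
tf merge $/MyProject/Releases/1.0 $/MyProject/Main /recursive
tf checkin /comment:"Merge 1.0 fixes back to Main"

The first command branches the main line into a release line; the second and third merge any fixes made on the release line back into Main and check them in. Whatever system you use, the point is the same: branching and merging should be a couple of commands (or clicks), not a manual copy-and-sync exercise.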

Automated Builds

Continuous integration and automated builds might not fit your project, but just how are you building your release editions now? I bet they are probably done manually, by a developer, on a developer’s machine? So how can you be sure there are no strange dependency problems (missing files!), or incorrect versions (DLL Hell), or even a virus or two creeping into your release? Or even, how can you be sure you are not about to release a debug config file? You can’t.
We all make mistakes, especially when under pressure – don’t put your project, or even your company, at risk by not having the ability to generate a release drop (fully unit tested) at a single click.
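Even without a full continuous integration server, a “single click” can be as simple as a small build script. This is only a sketch, assuming a Visual Studio 2010 era toolchain (msbuild.exe and mstest.exe on the path) and made-up solution and share names:

rem build.cmd – get clean source, rebuild, run tests, then drop the release output
tf get $/MyProject /recursive
msbuild MySolution.sln /t:Rebuild /p:Configuration=Release
if errorlevel 1 exit /b 1
mstest /testcontainer:MyProject.Tests\bin\Release\MyProject.Tests.dll
if errorlevel 1 exit /b 1
robocopy MyProject\bin\Release \\buildserver\Drops\MyProject /e

It is crude, but it already removes the “works on my machine” variable – and it is a natural stepping stone towards a proper build server such as TFS Build.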

Support and … Bugs

These are two topics that people quite often treat incorrectly in my opinion.

Support is just that: support. Helping users, and identifying issues with products. When they encounter a problem with the product that is reproducible, it should be transferred to the technical lead for evaluation and recorded in the BUG tracker. The eagle-eyed among you might have noticed I specifically said bug tracker. Yes, there should be two trackers – if you run a tracker for support, that is. Support and bugs MUST be kept separate – you will want your developers to record detailed information against bugs and manage them (linked to source changes etc., naturally), whereas support personnel shouldn’t have to wade through all this information just to find out when something was fixed.
Not to mention that most support people (and customers) don’t know how to create a good bug report anyway; because of this, a technical lead should always evaluate bugs before they progress – if only for technical accuracy!

Environment

The working environment really is key to development. Some people prefer quiet areas, some people like background noise. One thing everyone agrees is bad is loud noise, lots of conversations and general commotion – these break concentration, and will ultimately drive your devs nuts. Breakout areas and meeting rooms should be used for any significant communication or collaboration – not the middle of the office.
Companies shouldn’t be too strict about preventing people from customising their workspace either – as long as it doesn't go overboard, of course! I personally like having a picture of my motorbike around …
Bosses also need to realise that software development is not a 9-to-5, always-at-your-desk proposition. We often encounter problems that we just can’t crack – and we need to get out of the office, get some fresh air, some peace and quiet. It’s not that we are bunking off – we are getting away to THINK about the problem at hand. Obviously this is really difficult to police, but if you trust your development team, allowing them flexibility like this can massively increase productivity by letting them fully think through and design good solutions to problems.

Training

Always a sticky subject, this one; but companies must realise they need to work with developers to keep their skills current – developers hate being out of date (usually!), and if a company keeps them trained they will be more loyal in return.
That, and the last thing you want is developers who can’t use their tooling optimally – if someone cannot use a merge tool, they will resort to merging files by hand, which costs you (the owner!) money!

 

When I submit a report on a dev team, with suggestions on how to improve their workflow, the most common response I get is “it’ll cost too much”. And almost every time, they are not thinking about it logically – specifically, what costs are the existing poor practices already incurring? Bear in mind these might not all be quantifiable right now, as they can include things like customer confidence.

Most places that are carrying out development already license MSDN, and that means TFS 2010 is essentially … free, so personally I don’t think there is much of an argument when it comes to cost – and if the sysadmins moan that they need another server, kick them hard. Surely they should know their usage patterns, and TFS does not actually need that much power for small dev teams … if they are saying they need another server, it would make me think they haven’t tracked usage on the existing ones properly!

Development Team Commandments

I’m going to try and add to these as I think of others, but essentially the rules as I see them are:

* Only check in code that compiles.
* Be sure to check in all dependencies (yes, even external DLLs) that are not part of the standard shipping platform. If it doesn't build on a clean machine, you’ve messed up.
* Check in regularly. If it’s incomplete, use a private branch or a shelveset (TFS) – see the sketch after this list.
* Not sure about something? Then discuss it with the technical lead.
* Spend the time and write some unit tests for business logic. It will save you time later.
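On the shelveset point, a rough tf.exe example (the shelveset name and comment are made up):

tf shelve "WIP-login-refactor" /comment:"End of day – not compiling yet" /replace

This parks your pending changes on the server without checking them into the main code line, so nothing lives only on your machine overnight.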

Azure and the 'Aborted' / 'Recycling' Role

I recently uploaded a new, very basic, template project into Azure as the basis of something I’m working on, and after the prerequisite eternity waiting for it to initialise, realised that it was cycling between the Aborted and Recycling states. Very odd.

So I tried again. Same thing.

I gave up and contacted support.

I have to say, they were quick dealing with the request, but the information they gave was … to say the least … rather annoying.

The problem, they said, was that I had used MVC3 and the DLLs weren't included.

Fair enough.

 

BUT surely the Azure packaging tooling should be more intelligent? If Microsoft’s own libraries are not preloaded on the platform (only .NET 4 is, from what the engineer said), then surely the packaging tool should identify references that are not present and either warn about them or include them – not just carry on happily.

Not only that, the information should be surfaced in the NEW Azure portal, not hidden away. I’m told the information is available via an RDP login (which has to be enabled … yes, you guessed it … in the configuration when you deploy – so yet another delay!), but surely that’s overkill for most web roles?

Backing up … with Robocopy

We all need to do backups. But do you do them?

I have lots of various machines to back up, and all of them back up to a centralised location (a NAS), and then ultimately onto external hard disks for further redundancy. Lots of hassle, I know, but worth it in the worst-case scenario of something failing.

So … automation becomes key – simply so you don't have to manually drag and drop files every day / week / month / whatever your backup frequency is.

Microsoft have had a great command-line application, Robust File Copy (Robocopy), out there for a while, and it truly is amazing. Most importantly, it can handle partial copies / resuming – that means it is ideal for making backups, as it will only copy the changes.

There are a LOT of options for Robocopy – you can get a complete list by opening a command prompt and typing in “robocopy /?” and hitting enter. Be prepared to scroll.

The ones that I find the most important are:

/s = Copies subdirectories (but only if they have content)
/e = Copies subdirectories (even if they are empty)
/z = Copy using restartable mode (so if you have to stop it, you can resume)
/purge = Deletes files from the target if they no longer exist on the source
/v = Verbose logging – I like to see what's happening

Other than that, you use it the same as you would a file copy – there is support for jobs (i.e. you can save preset copy parameters and reuse them very easily), but I’ll leave that for you to explore.
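As a quick, illustrative example (the paths here are made up – substitute your own source folder and NAS share), a mirror using the switches above might look like:

robocopy "D:\Documents" "\\nas\Backups\Documents" /e /z /purge /v

Drop that into a scheduled task and the backup takes care of itself – for example, to run it nightly at 2am:

schtasks /create /tn "Nightly backup" /tr "robocopy D:\Documents \\nas\Backups\Documents /e /z /purge" /sc daily /st 02:00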

.NET 4.0-Platform Update 1

Today saw the release of what is, effectively, .NET 4.1 – although for some rather odd reason Microsoft have decided to name it slightly differently: .NET 4.0 Platform Update 1. Now there is a mouthful (and it is no doubt going to be highly confusing to end users when you ask them to install the .NET 4.0 Platform Update 1 runtime …)

So what’s new? Well, if you are not using Windows Workflow Foundation, it seems you might as well skip this update – as that’s where the changes are. Then again, there are some very interesting changes here, with the addition of state machine workflows (as well as SQL-backed persistence, which is supported in Azure).

Although this update is not really going to apply to many general .NET developers, what annoys me is the naming. And it seems I’m not alone. Why on earth someone had the bright idea to come up with this insane name I really don’t know. And to release it as three separate packages, too.

I wonder, are we going to see the demise of the good old major.minor.build-style version numbers in favour of something more freeform? If we do, I think it’s a step in the wrong direction, and a walk towards versioning / distribution hell …

Team Foundation Server 2010-Process Template Editor

I can almost guarantee that if you use TFS, you will need to edit a process template sooner or later; the default forms that TFS provides, although good, always need tweaking to fit how your team works.

I even find the EMC Scrum pack needs tweaking at times (I mean, why is there no Assigned To field for a bug??).

So, the easiest way to do this is to ensure you have the Team Foundation Power Tools installed, fire up Visual Studio, then click Tools and select Process Editor – then you get to choose what you want to edit!


The most common things I end up editing are Work Item Types – and specifically, I tend to cheat and edit them through this tool directly on the server.

Now, be sure to abide by all the warnings when editing process templates. These changes kick in immediately, and affect EVERYONE on the dev team using this project. You have been warned.

Also remember to export any modifications and re-import them into other project collections that use the same template, for consistency.

Cloud Computing-Too risky?

Amazon recently posted their response to the outage that hammered their EC2 platform. It would seem the outage was triggered by a piece of network maintenance that was not carried out properly, which in turn set off a rather catastrophic chain of events within Amazon's custom systems. Ultimately, it resulted in data loss, as well as downtime, for many customers – such as Heroku, who posted their own post-mortem of the incident here.

Microsoft Azure has also suffered problems recently, with parts of the system becoming unavailable. The first, in March, was blamed on an OS upgrade that went awry; then in April there was an Azure Storage outage – for which I’ve not actually seen any real detail on the cause (if anyone has a link, please point me to it – I’d love to know what happened). However, I think the stark contrast between these two vendors is the transparency and information given – both at the time and after the fact.

Amazon have gone the whole hog: totally admitting the fault, identifying exactly (in full Technicolor) the issues that occurred and resolving themselves – publicly – to fix it. And they have issued a decent amount of compute time as a refund. Microsoft? Well, I’ve not heard of any refunds – even partial ones – for the outages that occurred on their platform. I’ve also not heard of any refunds related to outages on another of their cloud platforms – Business Productivity Online Suite – which has had its own problems of late.

So is using cloud technology too risky? In a nutshell, no, as long as you are sensible. I can’t say that I would advocate putting everything in the cloud unless it is totally stateless and can keep operating if any SQL instances etc. disappear. If you need to store state, or anything really sensitive, I still prefer the hybrid model – but I guess that’s because they need to do more to convince me that they are as secure as they proclaim to be.

The biggest fault with people using clouds to date and suffering outages is quite simply education. They have put applications up into the cloud and expect them to be highly available. That’s not how it works. You still need to understand the requirements of highly available design, and be sure to implement them – including setting your application up in different zones / regions and, ideally, different geographical locations! If you don't, all you are really doing is running a small cluster.

I know that many people will be screaming about the EC2 outage in particular, where this was caused by human error. But I’d love to see them do better in their own data centre. Human error occurs everywhere – but where do you think the resources (i.e. skills AND money) to mitigate it are better placed? On premise with yourself, or out in a cloud?

Team Foundation Server-E-Mail Alerts

I love things that remind me to do things. I’m a forgetful person, and I need prompts. So I guess that's why I’m a big fan of e-mail alerts, and one of the first things I do in TFS is configure them.

The only annoyance I’ve ever hit with TFS alerts is the rather strange inability to set up authentication for the SMTP server used to send the alerts – I like to use an external server, as for a lot of the jobs I’m involved in, the users are widely geographically dispersed and all on different providers.

While you can correct this shortcoming by editing the TFS web services config file (found at C:\Program Files\Microsoft Team Foundation Server 2010\Application Tier\Web Services\web.config), I don’t like this approach, as it feels risky – you never know when something, a service pack for instance, will overwrite this file.

So, what do you do?

Simple: install the SMTP Server feature of IIS on the server, and run a locally restricted SMTP service that relays to your smart host.


Once you have installed the SMTP Server, it’s important that you secure it – ideally, grant relay permissions only to the local machine (127.0.0.1). Then go to the Delivery tab and click Advanced. Specify the details of your external smart host – if you need to provide authentication, you will find the relevant options under the Outbound Security button.

Then, open up Team Foundation Server Administration Console, click on Application Tier, and then Alert Settings (over on the right). Fill in the boxes, and away you go.


Oh, and one last thing – make sure the SMTP Service is running!
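A quick way to sanity-check the relay (assuming the Telnet client feature is installed – it isn't by default on recent versions of Windows Server) is simply:

telnet 127.0.0.1 25

If the local SMTP service is up, you should get a 220 banner back; if the connection is refused, TFS won’t be able to send anything either.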

The final step is to use the excellent Alerts Explorer tool from the TFS Power Tools pack to set up your alerts.


Team Foundation Server-Automated Backups

One of the things that never ceases to make me smile is the number of companies running Microsoft’s Team Foundation Server software … who don’t back it up.

For those that don’t know, TFS can be looked at as a central store for pretty much all the work that goes on inside a software company. Neglecting to back it up is opening yourself to disaster.

As the TFS databases are nothing more than SQL Server databases, you can back them up in the normal SQL way, or use a tool (there are multiple databases, and you have to capture them all at the same time and in the same state – not always easy to achieve). My tool of choice for handling these backups is actually part of the Microsoft Team Foundation Server Power Tools, and integrates neatly into the TFS Administration Console.

The first step after installing the tools is to Create Your Backup Plan.


Some things that you will need before you start are:

* A network location where you want your backups to go
* An idea of how long you want your backup retention to be (defaults to 30 days)
* An idea of how you want to schedule your backups – the default nightly runs at 2am local time

Now, my TFS server doubles up as the main file server, so I cheated and entered a local path in the Network Backup Path (these backups are in turn synced off to a remote device nightly). This, although accepted, failed the readiness check – as it’s not a network path.

One strange gotcha: I chose to run Full and Transactional backups, leaving Differential off – and found you have to uncheck any checked day-selection boxes before you can continue.

The other thing that caught me out was that the Grant Backup Permissions and Backup Tasks Verification steps were failing, saying that my own account did not have suitable rights to the backup location (strange, as I’m an admin, and I have full rights to both the NTFS folder and the share). After checking the TFS and SQL Server logs, the problem turned out to be that my target share had a space in its name. Putting quotes round it doesn't help either – it just causes something else to fail.

And the third, and final, thing? Don’t use the Local System account. Remember to set up your own, restricted-where-possible, accounts for all services.
