This post is the first of what I intend to be an open-ended series of posts about my current experience working on a pet development project for learning purposes.

Not long ago, I set myself the task of experimenting with some new and upcoming projects in the .NET development space. The original intention was simply to find out what they were and how they were built.

More recently, however, I’ve given myself a pet project with the task of building it using TDD and, more importantly, of trying out these different projects and working out a solution that I’m comfortable has zero friction (or very little). I’m not asking a lot – just proper SoC – and at the end I’d (theoretically) have a project which is a first-round example of TDD and proper design.

  • I want to provide multiple UIs for the core application (Web, Win, WPF) – I’ve seen a number of application architectures which employ practices to separate presentation from logic, but ALL of them either failed over time or never actually tried moving to a different presentation platform (at which point they too probably would have failed). For this reason, I wanted to build something which (from day zero) has been built in parallel with several UIs.
  • I want to be able to rip out the database and put a new one in – This is more academic than anything else. IMO there are a lot of unsubstantiated claims floating around the software world that proper abstraction will *easily* allow you to swap datastores on a whim. Well, I want to put that to the test.

So in pursuit of these goals, I’ve started writing my simple note-taking application, and over a very disparate period of weeks I’ve done a lot of reading and discovering of tools for the ORM, migration and object model.

In terms of data migration, I was impressed with what MigratorDotNET was capable of. It basically allowed me to stay in one programming language, and with a little NAnt scripting I was able to generate a database migration system.

using System;
using System.Data;
using Migrator.Framework;

namespace DBMigrations
{
	[Migration(1)] // version number picked up by the migration runner
	public class _0001_CreateNoteTable : Migration
	{
		public override void Up()
		{
			Database.AddTable("Note", new Column[]
			{
				new Column("NoteId", DbType.Guid, ColumnProperty.PrimaryKey | ColumnProperty.NotNull, "NEWID()"),
				new Column("NoteTitle", DbType.String, 64, ColumnProperty.NotNull),
				new Column("NoteData", DbType.String, 32768)
			});
		}

		public override void Down()
		{
			Database.RemoveTable("Note");
		}
	}
}
Not that I had a major problem with the thinking behind the approach offered by MigratorDotNET, but there were two things I didn’t like about the system:

  • Database schema names are defined as strings – any typos aren’t picked up at compile time, they’re only picked up at unit-test time (if you’re diligent). You could work around this by using string constants to define the column and table names, but then you’d have to maintain that list too.
  • Changes to the database had to be run independently of the application. In my case I’d used the NAnt provider to perform migration, but I had the intention of integrating it into the application so that it could self-upgrade without the need for an external component.
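
What I have in mind for that second point is running the migrations in-process at application start-up. A minimal sketch, assuming Migrator.NET exposes a `Migrator.Migrator` class taking a provider name, connection string and migrations assembly, plus a `MigrateToLastVersion()` method (the class and method names here are from memory – treat them as assumptions):

	using System.Reflection;

	public static class SelfUpgrade
	{
		// Bring the database up to the latest migration at application
		// start-up, instead of relying on an external NAnt task.
		public static void MigrateToLatest(string connectionString)
		{
			Migrator.Migrator migrator = new Migrator.Migrator(
				"SqlServer",                       // provider name
				connectionString,
				Assembly.GetExecutingAssembly());  // assembly containing the _000x_ migration classes

			migrator.MigrateToLastVersion();
		}
	}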

At the time I started the project, this system seemed fine, and I was able to write tests for the migration using a base generic class:

	public class MigrationTester<T> where T : Migration, new()
	{
		public virtual void AssertMigrateUp(T migration) { }
		public virtual void AssertMigrateDown(T migration) { }

		[Test]
		public void Test_Migrate()
		{
			MockRepository repos = new MockRepository();
			ITransformationProvider mockDB = (ITransformationProvider)repos.Stub(typeof(ITransformationProvider));

			T migration = new T()
			{
				Database = mockDB
			};

			Exception thrownException = null;
			try
			{
				migration.Up();
				AssertMigrateUp(migration);
				migration.Down();
				AssertMigrateDown(migration);
			}
			catch (Exception ex)
			{
				thrownException = ex;
			}

			if (thrownException != null)
				throw new Exception(String.Format("Failed to migrate up and down for migration {0}", typeof(T)), thrownException);
		}
	}

	[TestFixture]
	public class _0001_CreateNoteTableTest : MigrationTester<_0001_CreateNoteTable>
	{
		public override void AssertMigrateUp(_0001_CreateNoteTable migration)
		{
			Assert.IsTrue(migration.Database.ColumnExists("Note", "NoteId"));
			Assert.IsTrue(migration.Database.ColumnExists("Note", "NoteTitle"));
			Assert.IsTrue(migration.Database.ColumnExists("Note", "NoteData"));
			// ...etc... //
		}
	}

So the first part was done – I had a system for migrating my database and a means for testing it.

The great SQL Performance saga continued today, taking a turn for the… ah… different…


  • The servers are exhibiting high disk I/O activity.
  • So much disk activity that all queries are nearly brought to a grinding halt while the disks thunder away.
  • The Query Plans for all I/O intense queries use index seeks – no scanning at all.

This leads me to conclude that the behaviour of the server is fine, and that the system is doing everything expected of it. Which leads me to ask: why is an index seek taking so long to run off the disk?

Well it turns out there is in fact a way to determine the level of fragmentation of SQL indexes.

I’d written a simple query based on this content to calculate the amount of fragmentation on the live server’s indexes, and needless to say I’m a bit shocked. I sincerely thought this was something that the “qualified” DBA consultant on the other end was capable of doing; in his defense, he might have forgotten to run it and then moved on to the next high-paying job, leaving us to sweep up the shambles.

I’m no DBA… although if I’m correct I’ll be justified in calling myself one, and I’ll also have a few more letters to add to the end of my job title and hopefully a zero to throw on the end of my salary… all for doing SFA.

FWIW, here is the query which revealed all the gory details:

select	i.object_id, i.index_id, s.avg_fragmentation_in_percent, s.avg_fragment_size_in_pages
from	sys.indexes i
join	sys.dm_db_index_physical_stats(DB_ID('DATABASE_NAME_GOES_HERE'), null, null, null, null) s
	on i.object_id = s.object_id
	and i.index_id = s.index_id
where	(
		s.avg_fragmentation_in_percent > 0
		or s.avg_fragment_size_in_pages > 0
	)
	and i.index_id <> 0	-- index_id 0 is the heap itself
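
Having found the fragmented indexes, the follow-up is routine maintenance. A sketch with a hypothetical index and table name (`IX_Note_NoteTitle` on `dbo.Note` – substitute whatever the query above flags), following the usual guidance of reorganizing moderately fragmented indexes and rebuilding heavily fragmented ones:

-- Hypothetical names: pick real targets from the fragmentation query above.
-- Rough rule of thumb: REORGANIZE between roughly 5% and 30% fragmentation...
ALTER INDEX IX_Note_NoteTitle ON dbo.Note REORGANIZE;

-- ...and REBUILD above roughly 30% (heavier locking unless run ONLINE on Enterprise Edition).
ALTER INDEX IX_Note_NoteTitle ON dbo.Note REBUILD;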

This is really just a stub post, and should force me to come back and add more detail to the topic.

In a nutshell, be careful if you have parallelism turned ON for your multi-core SQL Server, as SQL’s internal worker threads can seemingly jam themselves into a deadlock scenario when executing a parallelised query.

To disable parallelism for a single query, use the OPTION (MAXDOP 1) query hint:

SELECT col1, col2, col3
FROM table1 t1
INNER JOIN table2 t2 ON t1.Id = t2.Id
WHERE SomeCondition = SomeValue
OPTION (MAXDOP 1)

This will force SQL Server to run the query with a maximum degree of parallelism of 1 (i.e. a single thread). Good-bye locks!

Some references:
Query Hint (T-SQL)
Appropriate Uses of Parallelism in SQL Server
Detecting and Ending Deadlocks
Deadlock Troubleshooting (Part 1)

Wow, lots has happened in the last 4 hours. Where do I start?

  • In order to diagnose a locking/blocking issue on SQL Server, start with sp_who2 to give you the list of active SPIDs and which SPIDs are blocking requests. Then execute DBCC INPUTBUFFER(<blocking SPID>) to find the offending SQL.
  • In order to free space on a critical HDD, you can move the data over to a new location on the same machine and create a junction (a symbolic link, for the *nix folk). It’s a feature of NTFS 3.0 and IMO not used anywhere near as heavily as it should be. Most underrated feature in Windows.
  • It’s hard to come across a good music compilation, but by God someone has managed to do it
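
To make the first point concrete, the diagnosis runs something like this (SPID 53 is a made-up example – use whatever shows up in the BlkBy column):

-- List active sessions; the BlkBy column shows which SPID is blocking each one.
EXEC sp_who2

-- Suppose SPID 53 appears in BlkBy: dump the last batch that session submitted.
DBCC INPUTBUFFER(53)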