Upper Management Support the Key to Success? No.

You have heard this one before.  The key to project success is “Upper Management Support”.  I hear the phrase so often that it is pretty much a cliché, right up there with “be aligned with the business”, “brush your teeth in the morning”, and “exercise if you want to be healthy”.

So I have a humble request.  First of all, I think we can just start assuming that if you have a goal that requires more than, say, what a small office spends annually on copiers, you are going to need upper management support.  Especially if the goal is going to change the business.  If you are engaged in a “change the company for the better” initiative, getting upper management support is basically the first, and one of the easier, steps.

  • The real hard stuff isn’t upper management, it’s middle management.  Upper management does not usually have their empires or jobs threatened by a significant IT initiative.  Middle management, on the other hand, often does.
  • The day to day impact of a large programme on the lives of upper management is comparatively low.  Most of the time, they can get on with their normal duties and continue to operate at the “strategic” level rather than hands on.  Middle management, on the other hand, often has to expend significant time, energy, and political capital in the programme in order to make sure it works.
  • The number of people in upper management is lower, well, because they are upper management.  You can more easily get a small number of people on the same page.  Middle management numbers are much higher, making the politics of middle management engagement much more difficult.
  • Various groups in middle management that are impacted unevenly create opportunities for internecine politics to enter the scene.  Even if you get middle management engagement, keeping it is a much more difficult chore.

There are all sorts of things you need in order to make a large programme of transformational work successful.  Upper management support is just the “table stakes” – the ante for getting to the table.  Getting support of middle management, and getting past the many roadblocks they can put in your way, is much harder, frequently underestimated, and frankly, in my experience, much more of a project success factor.


The “Dark Matter” of Technical Debt: Enterprise Software

Bespoke software is expensive. As we all well know, it is risky to build, technical debt can easily creep in, and you can easily end up with a maintenance nightmare. And software developers, well – we all know they are hard to work with, they tend to have opinions about things, and did I mention, they are expensive?

The argument has always been that with purchased software, you get an economy of scale because you share the software with others. Of course, this works out well most of the time – nobody should ever be developing their own internal commodity software (think operating systems, databases, and other “utilities”).

However, not all software is “utility”. There is a continuum of types of software, ranging from something like Microsoft Windows or Linux on one end, which nobody in their right mind would write themselves, to company-specific applications of all kinds that have zero applicability outside of a given, well, “Enterprise”, on the other. The software I am talking about in this post lies somewhere in the middle of these extremes.

Almost anyone who does work in corporate IT has probably encountered one of these systems. The following traits commonly pop up:

  • It is oriented at a vertical market.  The number of customers is often measured in 10s or 100s.
  • The cost for purchase is usually measured with at least 6 figures in USD.
  • It usually requires significant customization – either by code, or by a byzantine set of configuration options.
  • It was almost certainly sold on a golf course, or in a steak house.
  • You usually need the vendor’s own consultants to do a decent installation.  The company that sells the software has professional services revenues at or above its software license revenues.

It is my observation that software in this category is almost always loaded with technical debt.  Technical debt that you can’t refactor.  Technical debt that becomes a permanent fixture of the organization for many years to come.  Enterprise software – especially software sold as “Enterprise Software” to non-technical decision makers – is more often than not a boat anchor that holds organizations back, adding negative value.

Why is this?  Enterprise software is often sold on the basis of flexibility.  A common process, sadly, in the world of package selection, is to simply draw up a list of features, evaluate a set of vendors on the basis of desired features, and balance that against some license cost + implementation cost threshold.  Lip service is given to “cost-of-ownership”, but the incentives in place reward minimizing the perceived future costs.  What this process selects for is a combination of maximum flexibility, moderate license cost relative to a build (but often high), and minimized estimates of implementation cost.  Even if one company bucks the trend, the competitive landscape always selects for things in this direction.

Why is that true?  We don’t assess the technical debt of enterprise software.  I have seen a lot of buy versus build analysis in my years as a technology consultant, and not once did I see something that assessed the internal quality of the solution.  Enterprise software is bought based on external features, not internal quality.  Nobody asks about cyclomatic complexity or afferent coupling on the golf course.

Does internal quality of purchased software matter?  Absolutely.  In spades.  It is hardly uncommon for companies to start down a path of packaged software implementation, find some limitation, and then need to come to an agreement to customize the source code.  Rarely does anyone have the intent to take on source when the software is purchased, but frequently, it happens anyway when the big hairy implementation runs into difficulty.  But even if you never take possession of the source code, the ability for you to get any upgrades to the solution will be affected by the packaged software vendor’s ability to add features.  If the internal quality is bad, it will affect the cost structure of the software going forward.  APIs around software that has bad internal quality tend to leak out that bad quality, making integration difficult and spreading around the code smells that are presumably supposed to be kept “inside the black box”.

What is the end result?  Package implementations that end up costing far in excess of what it would have cost to build a piece of custom software in the first place.  Lots of good money thrown after bad.  Even when the implementation works, massive maintenance costs going forward.  It gets worse, though.  The cost of the last implementation often colors the expectations for what the replacement should cost, which tends to bias organizations toward replacing one nasty behemoth of an enterprise software package with something equally bad.  It is, as the French like to say, a fine mess.

So what is the solution?  We need to change how we buy enterprise software.  The tools we have for buy-versus-build analysis are deficient, as few models include a real, robust cost-of-ownership analysis that properly accounts for the effects of insufficient internal quality.  It is amazing that in this day and age, when lack of proper due diligence in package selection can cost an organization literally billions of dollars, so little attention is paid to internal quality.

What would happen?  There would be a renewed incentive toward internal quality.  Much of today’s mediocre software would suddenly look expensive – providing room for new solutions that are easier to work with and maintain, and that provide more lasting business value.  More money could be allocated to strategic software that uniquely helps the company, providing more space for innovation.  In short, we would realize vastly more value out of our software investments than we do today.


F# Based Discriminated Union/Structural Similarity

Imagine you have a need to take one type, which may or may not be a discriminated union, and see if it “fits” inside of another type.  A typical case might be whether one discriminated union case would be a possible case for a different discriminated union.  That is, could the structure of type A fit into the structure of type B.  For lack of a better word, I am calling this “structural similarity”.

Let’s start with some test cases:

module UnionTypeStructuralComparisonTest

open StructuralTypeSimilarity
open NUnit.Framework

type FooBar =
    | Salami of int
    | Foo of int * int
    | Bar of string

type FizzBuz =
    | Toast of int
    | Zap of int * int
    | Bang of string

type BigOption =
    | Crap of int * int
    | Bang of string
    | Kaboom of decimal

type Compound =
    | Frazzle of FizzBuz * FooBar
    | Crapola of double

[<TestFixture>]
type PersonalInsultTestCase() =

    [<Test>]
    member this.BangCanGoInFooBar() =
        let bang = Bang("I like cheese")
        Assert.IsTrue(bang =~= typeof<FizzBuz>)
        Assert.IsTrue(bang =~= typeof<FooBar>)
        Assert.IsTrue(bang =~= typeof<BigOption>)

    [<Test>]
    member this.KaboomDecimalDoesNotFitInFizzBuz() =
        let kaboom = Kaboom(45m)
        Assert.IsFalse(kaboom =~= typeof<FizzBuz>)

    [<Test>]
    member this.SomeStringCanBeFooBar() =
        let someString = "I like beer"
        Assert.IsTrue(someString =~= typeof<FooBar>)

    [<Test>]
    member this.SomeFoobarCanBeString() =
        let someFoobar = Bar("I like beer")
        Assert.IsTrue(someFoobar =~= typeof<string>)

    [<Test>]
    member this.SomeFoobarTypeCanBeString() =
        Assert.IsTrue(typeof<FooBar> =~= typeof<string>)

    [<Test>]
    member this.CompoundUnionTest() =
        let someCompound = Frazzle(Toast(4),Salami(2))
        Assert.IsTrue(someCompound =~= typeof<FooBar>)

To make this work, we are going to need to implement our =~= operator, and then do some FSharp type-fu in order to compare the structure:

module StructuralTypeSimilarity

open System
open Microsoft.FSharp.Reflection
open NLPParserCore

// A type is "a case" if its name matches one of the cases of its union.
let isACase (testUnionType:Type) =
    testUnionType
    |> FSharpType.GetUnionCases
    |> Array.exists(fun u -> u.Name = testUnionType.Name)

// Reduce a union case to the type of its payload: a tuple type for
// multi-field cases, or the single field's type otherwise.
let caseToTuple (case:UnionCaseInfo) =
    let fields = case.GetFields()
    if fields.Length > 1 then
        fields
        |> Array.map( fun pi -> pi.PropertyType )
        |> FSharpType.MakeTupleType
    else
        fields.[0].PropertyType

let rec UnionTypeSourceSimilarToTargetSimpleType (testUnionType:Type) (targetType:Type) =
    if (testUnionType |> FSharpType.IsUnion)
       && (not (targetType |> FSharpType.IsUnion)) then
        if testUnionType |> isACase then
            let unionCase =
                testUnionType
                |> FSharpType.GetUnionCases
                |> Array.find(fun u -> u.Name = testUnionType.Name)
            let myCaseType = caseToTuple unionCase
            myCaseType =~= targetType
        else
            testUnionType
            |> FSharpType.GetUnionCases
            |> Array.map( fun case -> (case |> caseToTuple) =~= targetType )
            |> Array.exists( fun result -> result )
    else
        raise( new InvalidOperationException() )

and UnionTypeSourceSimilarToUnionTypeTarget (testUnionType:Type) (targetUnionType:Type) =
    if (testUnionType |> FSharpType.IsUnion)
       && (targetUnionType |> FSharpType.IsUnion) then
        if testUnionType |> isACase then
            targetUnionType
            |> FSharpType.GetUnionCases
            |> Array.map( fun u -> u |> caseToTuple )
            |> Array.map( fun targetTuple -> testUnionType =~= targetTuple )
            |> Array.exists( fun result -> result )
        else
            testUnionType
            |> FSharpType.GetUnionCases
            |> Array.map( fun case -> (case |> caseToTuple) =~= targetUnionType )
            |> Array.exists( fun result -> result )
    else
        raise( new InvalidOperationException() )

and SimpleTypeSourceSimilarToUnionTypeTarget (testSimpleType:Type) (targetUnionType:Type) =
    if (not (testSimpleType |> FSharpType.IsUnion))
       && (targetUnionType |> FSharpType.IsUnion) then
        targetUnionType
        |> FSharpType.GetUnionCases
        |> Array.map( fun u -> u |> caseToTuple )
        |> Array.map( fun targetTuple -> testSimpleType =~= targetTuple )
        |> Array.exists( fun result -> result )
    else
        raise( new InvalidOperationException() )

and SimpleTypeSourceSimilarToSimpleTypeTarget (testSimpleType:Type) (targetSimpleType:Type) =
    if (testSimpleType |> FSharpType.IsTuple) && (targetSimpleType |> FSharpType.IsTuple) then
        let testTupleTypes = testSimpleType |> FSharpType.GetTupleElements
        let targetTupleTypes = targetSimpleType |> FSharpType.GetTupleElements
        if testTupleTypes.Length = targetTupleTypes.Length then
            let matches =
                Array.zip testTupleTypes targetTupleTypes
                |> Array.map( fun (test,target) -> test =~= target )
            not (matches |> Array.exists( fun result -> not result ))
        else
            false
    else
        testSimpleType = targetSimpleType

and (=~=) (testObject:obj) (targetType:Type) =
    let objIsType (o:obj) =
        match o with
        | :? Type -> true
        | _ -> false

    let resolveToType (o:obj) =
        match objIsType o with
        | true -> o :?> Type
        | false -> o.GetType()

    let testObjectIsAType = testObject |> objIsType
    let testObjectTypeIsUnion =
        match testObjectIsAType with
        | true -> testObject |> resolveToType |> FSharpType.IsUnion
        | false -> false
    let targetTypeIsAUnion = targetType |> FSharpType.IsUnion

    let resolvedType = testObject |> resolveToType

    match testObjectIsAType, testObjectTypeIsUnion, targetTypeIsAUnion with
    | false, _, _ -> resolvedType =~= targetType
    | true, true, false -> UnionTypeSourceSimilarToTargetSimpleType resolvedType targetType
    | true, false, false -> SimpleTypeSourceSimilarToSimpleTypeTarget resolvedType targetType
    | true, true, true -> UnionTypeSourceSimilarToUnionTypeTarget resolvedType targetType
    | true, false, true -> SimpleTypeSourceSimilarToUnionTypeTarget resolvedType targetType

Getting this to work seemed harder than it should have been.  While my tests pass, I am sure there are both cases I have not yet covered and some simpler ways I could accomplish the same goals.

While this is a work in progress, if anyone has any thoughts for simpler ways to do something like this, I am all ears.


Using Dynamic with C# to read XML

On April 10th (less than 1 week away), I am doing an updated version of my talk at Twin Cities Code Camp about using dynamic with C#.

One core technique I am seeking to demonstrate is the concept of a dynamic XML reader: a more human-readable way to consume XML content from C# or any other language with dynamic support.

Consider the following usage scenarios:
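As a rough sketch of the kind of call site we are after (the DynamicXmlReader class, file name, and element names here are hypothetical):

```csharp
// settings.xml:
// <configuration>
//   <appSettings>
//     <fooBar>42</fooBar>
//   </appSettings>
// </configuration>
dynamic config = new DynamicXmlReader(XElement.Load("settings.xml"));

// Dot notation instead of XPath or Linq to XML plumbing:
string fooBar = config.appSettings.fooBar;
```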


What we would like is an object that will use dynamic in C# to make it so that we can read XML without having to think about all the nasty mechanics of search, XPath, and other stuff that isn’t “I am looking for the foobar configuration setting” or whatever it is we are looking for in the XML we want to look at.  The following is the basic spiked implementation:
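The spike itself is not reproduced here, but a simplified stand-in, built the “easy way” by inheriting from DynamicObject rather than implementing the meta-object machinery directly, might look something like this (class and member names are illustrative):

```csharp
using System.Dynamic;
using System.Linq;
using System.Xml.Linq;

// Sketch of a dynamic XML reader: property access walks the element tree.
public class DynamicXmlReader : DynamicObject
{
    private readonly XElement element;

    public DynamicXmlReader(XElement element)
    {
        this.element = element;
    }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        var matches = element.Elements(binder.Name).ToList();
        if (matches.Count == 0)
        {
            result = null;
            return false;  // unknown element name fails the binding
        }
        result = matches.Count == 1
            ? Wrap(matches[0])
            : matches.Select(Wrap).ToArray();  // repeated elements become an array
        return true;
    }

    // Leaf elements resolve to their text; elements with children resolve
    // to another reader, enabling further dot notation.
    private static object Wrap(XElement e)
    {
        return e.HasElements ? (object)new DynamicXmlReader(e) : e.Value;
    }
}
```

The DynamicMetaObject-based version described below follows the same three-way shape: a string, a nested dynamic object, or an array of either.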


The real hard part was figuring out the mechanics of how DynamicMetaObject actually does its work.  Writing a dynamic object, if you are not going to do it the easy way and simply inherit from DynamicObject, means you are going to write two classes:

  • Something that implements IDynamicMetaObjectProvider
  • Something that inherits from DynamicMetaObject

The job of IDynamicMetaObjectProvider – at least as far as I can tell – is simply to point to the right DynamicMetaObject implementation, and somehow associate something with it that will potentially drive how the dynamic object will respond to various calls.  Most of the interesting stuff happens in DynamicMetaObject, where we get to specify how various kinds of bindings will work.

In a simple case like this where we are doing everything with properties, we merely need to override BindGetMember.  The return value of BindGetMember will generally be another DynamicMetaObject derived class.

Generally, DynamicMetaObjects take three parameters on construction:

  • A target expression
  • A binding restriction (something that tells the object a little about what it has – not entirely sure why…)
  • The actual thing to pass back

In the way I am using this, there are three main ways we return.  If we find we can resolve the property call to be a string, we are just going to wrap the string in a constant expression, specify a type restriction on string, and send that back.  If we are resolving to another XElement that has sub-items – we are going to wrap the XElement into a new DynamicXmlMetaObject – as a means to allow for further dot notation to get at sub-elements.  Lastly, if we have a group of the same item, we are going to return an array of either wrapped strings, or wrapped DynamicXmlMetaObjects.  Managing through these three cases is where most of the complexity is.

This is a work in progress – and I have already been told by some that this is a bad idea (i.e. “why not XPath?”, “that looks dangerous”, and “what if the Republicans use this?”).  But certainly, for certain kinds of problems, I can definitely see myself using this kind of thing to remove lots of Linq to XML plumbing code! (note, some work to integrate this and perhaps somehow combine this with XPath queries will probably happen).


Just Make Me Think! The Best Technologies Force Hard Choices.

In thinking about what is so compelling about certain new technologies that have emerged in recent years, a common theme appears.  The best technologies don’t just do something useful; they make the user think about the right things – the things that lead to better designs and more robust software.  Let’s start by considering some of the most compelling technologies and techniques that have generated buzz over the last couple of years:

Dependency Injection

Dependency injection is a great technique for managing coupling – at least it is in practice.  But there is nothing to stop you from using dependency injection to, say, create a giant class, inject thirtyteen-hundred dependencies into it, and give yourself a maintenance nightmare.

What is important about dependency injection is that it makes you think about dependencies.  When I am writing code, I want creating a dependency from one class to another to hurt.  Not a lot, but enough that it makes me put an entry in a file somewhere – be it an XML config, or a module, or something.  A DI tool that wires everything up for me without me having to explicitly think about it each time – well, that is to me like an HR management tool that does not have an “are you sure” prompt on the “fire employee” button.

Continuous Integration

Continuous Integration tools do some nifty things, one of the most important being providing some visibility to the state of the code.  When properly implemented, the benefits are pretty staggering.  However, in my experience, one of the most important benefits is that Continuous Integration forces you to think about how to automate the deployment process. You can’t continuously integrate if some person has to flip the bozo bit in order for the build to work.  CI promotes a model that makes tools that require human intervention to install seem obsolete.  I can’t see this as a bad thing.

Functional Programming

Most anyone who knows me in a professional context knows I am a big fan of functional programming.  I have grown to like the syntax and expressiveness of languages like F#, and there are a great many reasons why the language is important.  But to me, the most striking is that functional languages make you think long and hard about state.  You can do state in F# (through the mutable keyword), and even in pure functional programming languages like Haskell (through the state monad).  But the important thing here is that you have to do significant work to create state in these languages.  That leads to choices about state that tend to be more mindful than in languages where state is the default.

I could talk through more examples, but I think you get the picture.  The opposite tends to hold true as well – my main issue with a technology like ASP.NET Webforms is that it makes certain things easy (ViewState, notably) that it should not, in order to help you avoid having to think about the fact that you are running an application over a stateless medium.  When you are considering a new technology that is emerging – don’t think features, think “What choices is this technology forcing me to make?”


i4o v2

An update on a project I have been working on for some time, for which the time is definitely ripe.

It was an afternoon in 2007 when I was pondering: “Why am I writing the same Dictionary<K,V> collections just for indexing, and putting them internal to my collection classes, just so I can do query optimization?”  Thinking at the same time about how LINQ could work with such a thing, I decided to go forth and spike out a specialized collection type that would internally implement indexes and use them whenever you issue a query against the collection.  The result was the i4o project (codeplex.com/i4o).

If you are not familiar with i4o, the problem it solves is this: it takes any IEnumerable<T> in .NET and allows you to easily put indexes on it, such that LINQ queries use the index rather than doing a “tablescan”-style search through the collection.  You would create an IndexableCollection<T> as follows:

var someIndexable = new IndexableCollection<T>(someEnumerable, someIndexSpecification);

At this point, someIndexable wraps someEnumerable, applying an index to one or more properties, as specified in someIndexSpecification.  You could also inherit from IndexableCollection<T> and get indexing in your own custom collections, such that you could do a query like this:

var someItems = from s in someIndexable where s.SomeProperty == 42 select s;

… and the query would use the index for searching, rather than scan each item in the collection.

Over the years since (approaching 3!) – the project has evolved, little by little.  In 2008, Jason Jarrett started contributing work, bringing in important performance enhancements, better code organization, continuous integration, and various other things, that he elaborates on here.

For 2010, we wanted to do a significant refresh, with some new capabilities for core functionality.  The second major revision of i4o will feature the following:

* Support for INotifyPropertyChanged

Previously, if an indexed property changed, you had to manually notify the index of the change and reindex the item.  For i4o2, we have formally added support for property change notification, such that if T supports INotifyPropertyChanged, changes to any child will result in automatic reindexing of that item.
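As an illustrative sketch (a hypothetical child type; only the change notification matters here), a T that participates in automatic reindexing just needs to raise PropertyChanged when an indexed property changes:

```csharp
using System.ComponentModel;

// Hypothetical indexed child type: raising PropertyChanged on edits is
// what allows the i4o2 index to reindex the item automatically.
public class Person : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string favoriteColor;
    public string FavoriteColor
    {
        get { return favoriteColor; }
        set
        {
            favoriteColor = value;
            var handler = PropertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs("FavoriteColor"));
            }
        }
    }
}
```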

* Support for Observable<T>

If a collection inherits from Observable<T>, and T also supports INotifyPropertyChanged, you will be able to wrap an index around it that notices changes in the collection.

* Support for a far wider range of query operations

Previously, if a query was anything other than object.Property == <some constant | some expression>, i4o would simply use Enumerable.Where, rather than try to optimize the query using the index.  The index was based on strict hash tables, which limited the kinds of operations that could be optimized.  In v2, there is a much more sophisticated query processor that:

1.) It optimizes the index structure based on what the child type supports.  If the child type implements IComparable, it uses a red-black tree to hold the index, so that queries using comparison operations rather than equality can also be optimized.

2.) Uses a more sophisticated algorithm for visiting the expression tree of the predicate, allowing for indexes to work on much more complex predicates.  So if you are doing a query where your predicate is more complex (A and B and (C or D), for example) the processor will handle those as well.
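For example, a compound predicate of that shape, written against a hypothetical indexed collection of people (the property names are illustrative), could still use the indexes for its indexable terms:

```csharp
// A and B and (C or D): the v2 processor can consult the relevant
// indexes instead of falling back to a full Enumerable.Where scan.
var results = from p in someIndexable
              where p.Age >= 21
                 && p.Age < 65
                 && (p.FavoriteColor == "Red" || p.Name == "Bob")
              select p;
```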

* Creation via a builder pattern:

Rather than make the user try to figure out what index is best, creating an index is a matter of calling IndexBuilder.BuildIndicesFor with a source collection and a spec:

var someIndex = IndexBuilder.BuildIndicesFor(someCollection, indexSpec);

The builder detects the types passed to it and builds the optimal index type (ObservableIndexSet or just an IndexSet) based on the type of enumerable passed to it and the child type.

* More DSL-like index specification builder

While this has been supported for some time, we added some language to the builder, so the code would read better:

            var indexSpec = IndexSpecification<SimpleClass>.Build()
                .With(person => person.FavoriteColor)
                .And(person => person.Age)
                .And(person => person.Name);

These changes are very much in early alpha testing right now, with a goal of having them fully baked prior to the Visual Studio 2010 release.  At that time, we intend to move the changes into the project trunk and do a full release.

If you want early access to the changes before then, I encourage anyone who is interested to visit codeplex.com/i4o and see the i4o2 branch of the code repository.  Once we are sufficiently stable and have integrated some of the older perf improvements into the new code base, we will likely move the i4o2 branch to the trunk.


Why IT Matters More Than Ever

In 2003, Harvard Business Review published Nick Carr’s seminal essay, “IT Doesn’t Matter.” I remember that month well. The previous years of the PC boom, followed by the dotcom boom, had seen a tidal wave of money spent on technology. Much money was wasted on heavy investment in systems that either sat unused on a shelf, or ultimately were scrapped. To top it all off, the big “emergency” of the time, the millennium bug, turned out not to be as bad as everyone had feared. The time was ripe for someone to declare the strategic irrelevancy of technology, and Mr. Carr stepped in and made the case. Because many executives desperately wanted to reduce technology spending at the time (in the aftermath of all that gluttony), reading Carr’s article was like receiving manna from heaven.

As a technology practitioner myself, however, it should surprise no one that I believe Mr. Carr to be wrong. Regardless of any biases I may have, the reality is that his argument contains several holes that have become ever more glaring with the passage of time. My intent with this article is to shine a light on what’s wrong with the “IT doesn’t matter” argument. I’ll even go further: I’d like to demonstrate why, now more than ever, investing—not just in technology, but in the intersection of technology and people—is the most important action that a company or organization can take.

The Company That Couldn’t Run a Second Shift

In my years as a technology consultant, I’ve certainly seen what happens when you don’t invest in technology. One client I vividly remember—we’ll call it Company X—had quite a conundrum. An amalgamation of nine different manufacturing organizations built up over time, Company X spent very conservatively on technology. When making an acquisition, Company X would do as little as possible on integration; for example, only enough to get the accounting systems and the order-entry systems sending orders to a central mainframe.

Over time, this management behavior at Company X resulted in a “Rube Goldberg” system requiring the work of hordes of COBOL programmers just to keep running. Overnight order processing required more than 200 manual steps. The maintenance cost of this Frankenstein’s monster of a codebase included at least 40 full-time COBOL programmers, a couple of dozen system operators, and who knows how much management hierarchy on top of all that. Company X was spending an estimated $15 million annually just to keep the system at its current level of functionality.

Unfortunately, that $15 million wasn’t even the most significant problem. When demand increased, Company X decided to run a second shift. The issue was that the system literally could not process orders in greater than about a 12-hour window. Moving to a 16-hour window, and eventually to a round-the-clock business, would require either radical changes to the old system or building an entirely new system—and it would have to match the quirks of the old system, in order to maintain compatibility with the nine other satellite systems!

Business analysts estimated that adding a second shift would increase gross margin in this low-margin business by around 3%, while increasing top-line revenue by over 20%. Moving to round-the-clock processing was also a precursor to being able to sell into new global markets, which would have increased top-line revenue even further.

Sadly, none of these happy improvements could come to pass for Company X.

The problems at this company were many, of which technology was merely one aspect. The deeper issue was that the operators and programmers who understood that mess of a codebase had a vested interest in keeping the current technology set running. But even if all those people were replaced, technical debt in Company X had built up over a long period of time, to the point that the company would need to invest around 2–3 times its annual profit in order to get out from under the load of technical debt.

In this culture of hubris, fear, and inertia, any new technology initiative to pay off the technical debt would almost certainly be killed. It was literally a recipe for company bankruptcy. When competitors are blowing you out of the water with 3% higher margins and 20% higher top-line revenue, taking that increased profitability and investing it into further productivity-enhancing technologies, it’s hard to see why “IT doesn’t matter.” If you’re a CEO in a company that can’t compete because of technical debt, you almost certainly will understand just how much IT does matter.

The rest of this article is available at Pearson Education’s InformIT website here.
