Monthly Archives: April 2010

The “Dark Matter” of Technical Debt: Enterprise Software

Bespoke software is expensive. As we all know, it is risky to build, technical debt creeps in easily, and you can end up with a maintenance nightmare. And software developers, well – we all know they are hard to work with, they tend to have opinions about things, and did I mention they are expensive?

The argument has always been that with purchased software, you get an economy of scale because you share the software with others. Of course, this works out well most of the time – nobody should ever be developing their own internal commodity software (think operating systems, databases, and other “utilities”).

However, not all software is “utility”. There is a continuum of software types: at one end sits something like Microsoft Windows or Linux, which nobody in their right mind would write themselves; at the other, company-specific applications of all kinds that have zero applicability outside of a given, well, “Enterprise”. The software I am talking about in this post lies somewhere between these extremes.

Anyone who works in corporate IT has probably encountered one of these systems. The following traits commonly pop up:

  • It is aimed at a vertical market. The number of customers is often measured in the tens or hundreds.
  • The purchase cost usually runs to at least six figures in USD.
  • It usually requires significant customization – either by code, or by a byzantine set of configuration options.
  • It was almost certainly sold on a golf course, or in a steak house.
  • You usually need the vendor’s own consultants to do a decent installation. The company that sells the software has professional services revenues at or above its software license revenues.

It is my observation that software in this category is almost always loaded with technical debt. Technical debt that you can’t refactor. Technical debt that becomes a permanent fixture of the organization for years to come. Enterprise software – especially software sold as “Enterprise Software” to non-technical decision makers – is more often than not a boat anchor that holds organizations back, adding negative value.

Why is this? Enterprise software is often sold on the basis of flexibility. Sadly, a common process in the world of package selection is to simply draw up a list of features, evaluate a set of vendors against those desired features, and balance the result against some threshold of license cost plus implementation cost. Lip service is given to “cost of ownership”, but the incentives in place reward minimizing perceived future costs. What this process selects for is a combination of maximum flexibility, moderate license cost relative to a build (though often high in absolute terms), and minimized estimates of implementation cost. Even if one company bucks the trend, the competitive landscape keeps selecting in this direction.

Why is that true? We don’t assess the technical debt of enterprise software. I have seen a lot of buy-versus-build analysis in my years as a technology consultant, and not once did I see one that assessed the internal quality of the solution. Enterprise software is bought based on external features, not internal quality. Nobody asks about cyclomatic complexity or afferent coupling on the golf course.

Does internal quality of purchased software matter? Absolutely. In spades. It is not uncommon for companies to start down the path of a packaged software implementation, find some limitation, and then need to come to an agreement to customize the source code. Rarely does anyone intend to take on the source when the software is purchased, but it frequently happens anyway when the big hairy implementation runs into difficulty. But even if you never take possession of the source code, your ability to get upgrades will be limited by the vendor’s ability to add features. If the internal quality is bad, it will affect the cost structure of the software going forward. APIs around software with bad internal quality tend to leak that bad quality, making integration difficult and spreading around the code smells that are presumably supposed to be kept “inside the black box”.

What is the end result? Package implementations that end up costing far in excess of what it would have cost to build custom software in the first place. Lots of good money thrown after bad. Even when the implementation works, massive maintenance costs going forward. It gets worse, though. The cost of the last implementation often colors expectations for what a replacement should cost, which biases organizations towards replacing one nasty behemoth of an enterprise package with something equally bad. It is, as the French like to call it, a fine mess.

So what is the solution? We need to change how we buy enterprise software. The tools we have for buy-versus-build analysis are deficient – few models include a real, robust cost-of-ownership analysis that properly accounts for the effects of poor internal quality. It is amazing that in this day and age, when lack of proper due diligence in package selection can cost an organization literally billions of dollars, so little attention is paid to internal quality.

What would happen? There would be a renewed incentive toward internal quality. Much of today’s mediocre software would suddenly look expensive – making room for new solutions that are easier to work with and maintain, and that provide more lasting business value. More money could be allocated to strategic software that uniquely helps the company, creating more space for innovation. In short, we would realize vastly more value from our software investments than we do today.

F# Based Discriminated Union/Structural Similarity

Imagine you need to take one type, which may or may not be a discriminated union, and see if it “fits” inside another type. A typical case might be whether one discriminated union case would be a possible case for a different discriminated union. That is, could the structure of type A fit into the structure of type B? For lack of a better word, I am calling this “structural similarity”.

Let’s start with some test cases:

module UnionTypeStructuralComparisonTest

open StructuralTypeSimilarity
open NUnit.Framework

type FooBar =
    | Salami of int
    | Foo of int * int
    | Bar of string

type FizzBuz =
    | Toast of int
    | Zap of int * int
    | Bang of string

type BigOption =
    | Crap of int * int
    | Bang of string
    | Kaboom of decimal

type Compound =
    | Frazzle of FizzBuz * FooBar
    | Crapola of double

[<TestFixture>]
type PersonalInsultTestCase() =

    [<Test>]
    member this.BangCanGoInFooBar() =
        let bang = Bang("I like cheese")
        Assert.IsTrue(bang =~= typeof<FizzBuz>)
        Assert.IsTrue(bang =~= typeof<FooBar>)
        Assert.IsTrue(bang =~= typeof<BigOption>)

    [<Test>]
    member this.KaboomDecimalDoesNotFitInFizzBuz() =
        let kaboom = Kaboom(45m)
        Assert.IsFalse(kaboom =~= typeof<FizzBuz>)

    [<Test>]
    member this.SomeStringCanBeFooBar() =
        let someString = "I like beer"
        Assert.IsTrue(someString =~= typeof<FooBar>)

    [<Test>]
    member this.SomeFoobarCanBeString() =
        let someFoobar = Bar("I like beer")
        Assert.IsTrue(someFoobar =~= typeof<string>)

    [<Test>]
    member this.SomeFoobarTypeCanBeString() =
        Assert.IsTrue(typeof<FooBar> =~= typeof<string>)

    [<Test>]
    member this.CompoundUnionTest() =
        let someCompound = Frazzle(Toast(4), Salami(2))
        Assert.IsTrue(someCompound =~= typeof<FooBar>)

To make this work, we need to implement our =~= operator and then do some F# type-fu to compare the structures:

module StructuralTypeSimilarity

open System
open Microsoft.FSharp.Reflection

// Union case values are typically instances of a nested type named after
// the case; this detects whether the type we were handed is one of those.
let isACase (testUnionType:Type) =
    testUnionType
    |> FSharpType.GetUnionCases
    |> Array.exists (fun u -> u.Name = testUnionType.Name)

// Collapse a case's fields to a single type: a tuple type when there are
// multiple fields, otherwise the lone field's type.
// (Nullary cases are not handled - fields.[0] assumes at least one field.)
let caseToTuple (case:UnionCaseInfo) =
    let fields = case.GetFields()
    if fields.Length > 1 then
        fields
        |> Array.map (fun pi -> pi.PropertyType)
        |> FSharpType.MakeTupleType
    else
        fields.[0].PropertyType

let rec UnionTypeSourceSimilarToTargetSimpleType (testUnionType:Type) (targetType:Type) =
    if (testUnionType |> FSharpType.IsUnion)
       && (not (targetType |> FSharpType.IsUnion)) then
        if testUnionType |> isACase then
            // A specific case: compare just that case's field structure.
            let unionCase =
                testUnionType
                |> FSharpType.GetUnionCases
                |> Array.find (fun u -> u.Name = testUnionType.Name)
            let myCaseType = caseToTuple unionCase
            myCaseType =~= targetType
        else
            // The whole union: similar if any case fits the target.
            testUnionType
            |> FSharpType.GetUnionCases
            |> Array.map (fun case -> (case |> caseToTuple) =~= targetType)
            |> Array.exists (fun result -> result)
    else
        raise (new InvalidOperationException())

and UnionTypeSourceSimilarToUnionTypeTarget (testUnionType:Type) (targetUnionType:Type) =
    if (testUnionType |> FSharpType.IsUnion)
       && (targetUnionType |> FSharpType.IsUnion) then
        if testUnionType |> isACase then
            targetUnionType
            |> FSharpType.GetUnionCases
            |> Array.map (fun u -> u |> caseToTuple)
            |> Array.map (fun targetTuple -> testUnionType =~= targetTuple)
            |> Array.exists (fun result -> result)
        else
            testUnionType
            |> FSharpType.GetUnionCases
            |> Array.map (fun case -> (case |> caseToTuple) =~= targetUnionType)
            |> Array.exists (fun result -> result)
    else
        raise (new InvalidOperationException())

and SimpleTypeSourceSimilarToUnionTypeTarget (testSimpleType:Type) (targetUnionType:Type) =
    if (not (testSimpleType |> FSharpType.IsUnion))
       && (targetUnionType |> FSharpType.IsUnion) then
        targetUnionType
        |> FSharpType.GetUnionCases
        |> Array.map (fun u -> u |> caseToTuple)
        |> Array.map (fun targetTuple -> testSimpleType =~= targetTuple)
        |> Array.exists (fun result -> result)
    else
        raise (new InvalidOperationException())

and SimpleTypeSourceSimilarToSimpleTypeTarget (testSimpleType:Type) (targetSimpleType:Type) =
    if (testSimpleType |> FSharpType.IsTuple) && (targetSimpleType |> FSharpType.IsTuple) then
        let testTupleTypes = testSimpleType |> FSharpType.GetTupleElements
        let targetTupleTypes = targetSimpleType |> FSharpType.GetTupleElements
        if testTupleTypes.Length = targetTupleTypes.Length then
            // Element-wise comparison: every position must be similar.
            Array.zip testTupleTypes targetTupleTypes
            |> Array.forall (fun (test, target) -> test =~= target)
        else
            false
    else
        testSimpleType = targetSimpleType

// The =~= operator: the left side may be a value or a Type; resolve it to
// a Type, then dispatch to whichever union/simple comparison applies.
and (=~=) (testObject:obj) (targetType:Type) =
    let objIsType (o:obj) =
        match o with
        | :? Type -> true
        | _ -> false

    let resolveToType (o:obj) =
        match objIsType o with
        | true -> o :?> Type
        | false -> o.GetType()

    let testObjectIsAType = testObject |> objIsType
    let testObjectTypeIsUnion =
        match testObjectIsAType with
        | true -> testObject |> resolveToType |> FSharpType.IsUnion
        | false -> false
    let targetTypeIsAUnion = targetType |> FSharpType.IsUnion

    let resolvedType = testObject |> resolveToType

    match testObjectIsAType, testObjectTypeIsUnion, targetTypeIsAUnion with
    | false, _, _     -> resolvedType =~= targetType
    | true, true, false  -> UnionTypeSourceSimilarToTargetSimpleType resolvedType targetType
    | true, false, false -> SimpleTypeSourceSimilarToSimpleTypeTarget resolvedType targetType
    | true, true, true   -> UnionTypeSourceSimilarToUnionTypeTarget resolvedType targetType
    | true, false, true  -> SimpleTypeSourceSimilarToUnionTypeTarget resolvedType targetType

Getting this to work seemed harder than it should have been. While my tests pass, I am sure there are both cases I have not yet covered and simpler ways I could accomplish some of the same goals.

While this is a work in progress, if anyone has any thoughts for simpler ways to do something like this, I am all ears.

Using Dynamic with C# to read XML

On April 10th (less than a week away), I am giving an updated version of my talk at Twin Cities Code Camp about using dynamic with C#.

One core technique I am seeking to demonstrate is using a dynamic XML reader as a more human-readable way to consume XML content from C# or any other dynamic language.

Consider the following usage scenarios:

http://pastie.org/904555
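In spirit, the scenarios look like the following minimal sketch, which assumes the DynamicXmlReader wrapper spiked below (the class name, constructor, and sample XML are illustrative; the pastie has the real scenarios):

using System;
using System.Xml.Linq;

class Demo
{
    static void Main()
    {
        var xml = XElement.Parse(
            "<settings>" +
            "  <appName>MyApp</appName>" +
            "  <database><server>db01</server><port>5432</port></database>" +
            "</settings>");

        // DynamicXmlReader is the wrapper sketched later in this post.
        dynamic settings = new DynamicXmlReader(xml);

        // Dot notation instead of XPath or LINQ to XML plumbing.
        Console.WriteLine(settings.appName);         // "MyApp"
        Console.WriteLine(settings.database.server); // nested element access
    }
}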

What we would like is an object that uses dynamic in C# so that we can read XML without having to think about all the nasty mechanics of searching, XPath, and other stuff that isn’t “I am looking for the foobar configuration setting” or whatever it is we are after in the XML. The following is the basic spiked implementation:

http://pastie.org/904557
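In outline, the spike looks something like this minimal sketch. It is simplified relative to what the following paragraphs describe – rather than building each result directly into the expression tree, it delegates the member lookup to a runtime GetMember helper – but it shows the two-class IDynamicMetaObjectProvider / DynamicMetaObject split:

using System;
using System.Dynamic;
using System.Linq.Expressions;
using System.Xml.Linq;

public class DynamicXmlReader : IDynamicMetaObjectProvider
{
    private readonly XElement element;
    public DynamicXmlReader(XElement element) { this.element = element; }

    // IDynamicMetaObjectProvider's only job: point to the meta object
    // that decides how dynamic member accesses bind.
    public DynamicMetaObject GetMetaObject(Expression parameter)
    {
        return new DynamicXmlMetaObject(parameter, this);
    }

    // Runtime helper invoked by the bound expression: a leaf element
    // resolves to its string value; an element with children resolves to
    // another wrapper so dot notation can continue.
    public object GetMember(string name)
    {
        XElement child = element.Element(name);
        if (child == null) return null;
        return child.HasElements ? (object)new DynamicXmlReader(child) : child.Value;
    }

    private class DynamicXmlMetaObject : DynamicMetaObject
    {
        public DynamicXmlMetaObject(Expression expression, DynamicXmlReader value)
            : base(expression, BindingRestrictions.Empty, value) { }

        public override DynamicMetaObject BindGetMember(GetMemberBinder binder)
        {
            // Bind "reader.Foo" to a call of GetMember("Foo") on the wrapped reader.
            var call = Expression.Call(
                Expression.Convert(Expression, typeof(DynamicXmlReader)),
                typeof(DynamicXmlReader).GetMethod("GetMember"),
                Expression.Constant(binder.Name));

            // Restrict the cached rule to this type so the call site only
            // reuses it when the target is a DynamicXmlReader.
            var restrictions = BindingRestrictions.GetTypeRestriction(Expression, LimitType);
            return new DynamicMetaObject(call, restrictions);
        }
    }
}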

The really hard part was figuring out the mechanics of how DynamicMetaObject actually does its work. Doing a dynamic object, if you are not going to do it the easy way and simply inherit from DynamicObject, means you are going to write two classes:

  • Something that implements IDynamicMetaObjectProvider
  • Something that inherits from DynamicMetaObject

The job of IDynamicMetaObjectProvider – at least as far as I can tell – is simply to point to the right DynamicMetaObject implementation, and somehow associate something with it that will potentially drive how the dynamic object will respond to various calls.  Most of the interesting stuff happens in DynamicMetaObject, where we get to specify how various kinds of bindings will work.

In a simple case like this, where we are doing everything with properties, we merely need to override BindGetMember. The return value of BindGetMember will generally be another DynamicMetaObject instance (possibly of a derived class).

Generally, DynamicMetaObjects take three parameters on construction:

  • A target expression
  • A binding restriction (something that tells the call site when the cached binding rule remains valid, so the binding is not recomputed on every invocation)
  • The actual thing to pass back

In the way I am using this, there are three main ways we return. If the property call resolves to a string, we wrap the string in a constant expression, specify a type restriction on string, and send that back. If it resolves to another XElement that has sub-items, we wrap the XElement in a new DynamicXmlMetaObject, which allows further dot notation to reach sub-elements. Lastly, if we have a group of the same item, we return an array of either wrapped strings or wrapped DynamicXmlMetaObjects. Managing these three cases is where most of the complexity lies; a sketch of the resolution logic follows below.
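To make the three cases concrete, here is an illustrative resolver (a sketch, not the actual spike – in the real thing these results are produced during binding, as described above, rather than by a standalone helper; it reuses the DynamicXmlReader wrapper from the earlier sketch):

using System;
using System.Linq;
using System.Xml.Linq;

static class XmlMemberResolver
{
    // Resolve a member name against an element using the three cases:
    // a single leaf element -> its string value; a single element with
    // children -> another dynamic wrapper; a repeated element -> an array.
    public static object Resolve(XElement parent, string name)
    {
        var matches = parent.Elements(name).ToArray();
        if (matches.Length == 0)
            return null;

        Func<XElement, object> wrap = e =>
            e.HasElements ? (object)new DynamicXmlReader(e) : e.Value;

        return matches.Length == 1
            ? wrap(matches[0])
            : matches.Select(wrap).ToArray();
    }
}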

This is a work in progress – and I have already been told by some that this is a bad idea (e.g. “why not XPath?”, “that looks dangerous”, and “what if the Republicans use this?”). But for certain kinds of problems, I can definitely see myself using this kind of thing to remove lots of LINQ to XML plumbing code! (Note: some work to integrate this with, and perhaps combine it with, XPath queries will probably happen.)
