Bespoke software is expensive. As we all well know, it is risky to build, technical debt creeps in easily, and you can end up with a maintenance nightmare. And software developers, well – we all know they are hard to work with, they tend to have opinions about things, and did I mention, they are expensive?
The argument has always been that with purchased software, you get an economy of scale because you share the software with others. Of course, this works out well most of the time – nobody should ever be developing their own internal commodity software (think operating systems, databases, and other “utilities”).
However, not all software is “utility”. There is a continuum of types of software, ranging from something like Microsoft Windows or Linux on one end – which nobody in their right mind would write themselves – to company-specific applications of all kinds that have zero applicability outside of a given, well, “Enterprise”. The software I am talking about in this post lies somewhere in the middle of these extremes.
Almost anyone who does work in corporate IT has probably encountered one of these systems. The following traits commonly pop up:
- It is oriented at a vertical market. The number of customers is often measured in the 10s or 100s.
- The purchase cost usually runs to at least six figures in USD.
- It usually requires significant customization – either by code, or by a byzantine set of configuration options.
- It was almost certainly sold on a golf course, or in a steak house.
- You usually need the vendor’s own consultants to do a decent installation. The company that sells the software has professional services revenues at or above its software license revenues.
It is my observation that software in this category is almost always loaded with technical debt. Technical debt that you can’t refactor. Technical debt that becomes a permanent fixture of the organization for many years to come. Enterprise software – especially software sold as “Enterprise Software” to non-technical decision makers – is, more often than not, a boat anchor that holds organizations back, adding negative value.
Why is this? Enterprise software is often sold on the basis of flexibility. A common process, sadly, in the world of package selection, is to simply draw up a list of features, evaluate a set of vendors on the basis of desired features, and balance that against some license cost + implementation cost threshold. Lip service is given to “cost-of-ownership”, but the incentives in place reward minimizing the perceived future costs. What this process selects for is a combination of maximum flexibility, moderate license cost relative to a build (but often high), and minimized estimates of implementation cost. Even if one company bucks the trend, the competitive landscape always selects for things in this direction.
Why is that true? We don’t assess the technical debt of enterprise software. I have seen a lot of buy versus build analysis in my years as a technology consultant, and not once did I see something that assessed the internal quality of the solution. Enterprise software is bought based on external features, not internal quality. Nobody asks about cyclomatic complexity or afferent coupling on the golf course.
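These metrics are not exotic to compute, which makes their absence from package selection all the more striking. As a rough illustration, here is a simplified McCabe-style cyclomatic complexity count over Python source using only the stdlib `ast` module. The `ship` sample and the exact set of counted nodes are my own assumptions; real metric tools weigh more constructs than this sketch does.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """A simplified McCabe count: 1 plus the number of decision points."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        # Each branching construct adds one path through the code.
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.IfExp)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds two extra short-circuit branches.
            complexity += len(node.values) - 1
    return complexity

# Hypothetical sample code, for illustration only.
sample = """
def ship(order):
    if order.paid and order.in_stock:
        for item in order.items:
            if item.fragile:
                pack_carefully(item)
    else:
        raise ValueError("cannot ship")
"""
print(cyclomatic_complexity(sample))  # prints 5
```

A buyer could run a count like this (or an off-the-shelf tool) over any vendor source made available during due diligence; the point is that the question is cheap to ask and almost never asked.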
Does internal quality of purchased software matter? Absolutely. In spades. It is hardly uncommon for companies to start down a path of packaged software implementation, find some limitation, and then need to come to an agreement to customize the source code. Rarely does anyone have the intent to take on source when the software is purchased, but frequently, it happens anyway when the big hairy implementation runs into difficulty. But even if you never take possession of the source code, the ability for you to get any upgrades to the solution will be affected by the packaged software vendor’s ability to add features. If the internal quality is bad, it will affect the cost structure of the software going forward. APIs around software that has bad internal quality tend to leak out that bad quality, making integration difficult and spreading around the code smells that are presumably supposed to be kept “inside the black box”.
What is the end result? Package implementations that end up costing far in excess of what it would have cost to build custom software in the first place. Lots of good money thrown after bad. Even when the implementation works, massive maintenance costs going forward. It gets worse, though. The cost of the last implementation often colors the expectations for what the replacement should cost, which tends to bias organizations toward replacing one nasty behemoth of an enterprise software package with something equally bad. It is what the French like to call a fine mess.
So what is the solution? We need to change how we buy enterprise software. The tools we have for buy versus build analysis are deficient – few models include a real, robust cost-of-ownership analysis that properly accounts for the effects of poor internal quality. It is amazing that in this day and age, when lack of proper due diligence in package selection can cost an organization literally billions of dollars, so little attention is paid to internal quality.
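A quality-sensitive cost-of-ownership model need not be complicated. Here is a minimal, purely illustrative sketch: every name, the “quality drag” multiplier, and all the dollar figures below are made up to show the shape of the comparison, not to report real costs.

```python
def ten_year_tco(license_cost, implementation_cost, annual_maintenance,
                 quality_drag=1.0, years=10):
    """Up-front costs plus maintenance, where maintenance is inflated
    each year by a 'quality drag' multiplier standing in for the
    compounding cost of internal technical debt."""
    total = license_cost + implementation_cost
    yearly = annual_maintenance
    for _ in range(years):
        total += yearly
        yearly *= quality_drag  # poor internal quality compounds
    return total

# Illustrative numbers only: a cheap-looking package with heavy quality
# drag versus a pricier custom build that stays maintainable.
package = ten_year_tco(500_000, 1_000_000, 300_000, quality_drag=1.15)
build   = ten_year_tco(0,       2_500_000, 300_000, quality_drag=1.03)
print(f"package: ${package:,.0f}  build: ${build:,.0f}")
```

With these assumed inputs the “cheaper” package overtakes the build well before year ten – which is exactly the effect a feature-checklist evaluation never surfaces.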
What would happen? There would be a renewed incentive for internal quality. Much of today’s mediocre software would suddenly look expensive – providing room for new solutions that are easier to work with and maintain, and that provide more lasting business value. More money could be allocated to strategic software that uniquely helps the company, providing more space for innovation. In short, we would realize vastly more value out of our software investments than we do today.
7 thoughts on “The “Dark Matter” of Technical Debt: Enterprise Software”
[…] The “Dark Matter” of Technical Debt: Enterprise Software – The problem is that most “enterprise” software is just extraordinarily monolithic. What’s worse is that many organizations think this monolithic quality is a feature. […]
I have seen a rather toxic trend emerging from our latest ‘enterprise software’ inventory/work/document management systems: an environment in which it is perfectly acceptable to blame the users of an application for exposing particularly glaring bugs in the web application. That attitude has made its way over from the enterprise software provider into our own internal IT leadership.
I don’t fully understand it, but I feel like our own leadership has resigned itself to the fact that they are stuck with this highly unstable software environment, in which there is no way their concerns will be addressed appropriately at any reasonable cost.
Users causing major system instability by being so forthright as to bookmark a particular page within a webapp? Reinforce a new policy of ‘no bookmarking pages within the web application’ with some sternly worded emails and management alignment! This is especially bothersome after expending a great deal of effort on internal applications that actually conform to expected webapp behaviors, providing meaningful bookmarks and history even within an AJAX context.
Users open an extra tab for the web application in their browser, and all of a sudden items in the first tab are having their status changed unintentionally? Here comes another sternly worded email instructing users to never open more than one browser window or tab for this web application.
This application is supposedly well-established in the industry, but these are just two examples from the first few months of our migration where zero concern was given to deficiencies in the application and 100% of the blame was placed on the users. So is leadership just getting ready for an extended conflict with users as we become even further invested in this solution?
I’ve worked quite a bit in this world, on both sides of the table, and can offer a few (hopefully relevant) thoughts.
Firstly… there is an amazing variety in how companies operate, and companies generally want (or need) the software they use to accommodate their business processes. Therefore, the flexibility you speak of is not specious; it is necessary and valuable in these markets.
Aside #1: The only difference between a need and a want is that companies satisfy their needs first. From the point of view of a vendor, you aim to satisfy as many of both as possible to win the customer away from your competitors… so there isn’t really much of a difference.
Aside #2: There are some famous large cases, such as SAP, where companies adopting the software often find it necessary or desirable to modify their business processes considerably to accommodate the software.
Secondly… in competitive software evaluations, there is a near-universal phenomenon: at first look, numerous options are on the table, each of which appears to be a good one. But upon digging in, it turns out that most of them are wholly unsuitable for one reason or another. By the time the evaluation wraps up, only a handful of choices are still in the running, each with some major downsides.
I agree with you 100%, that the internal quality of purchased software is extremely important. But there are already so many decision criteria overwhelming the decision space, that it is often impossible to convince buyers to assign much weight to internal quality.
It’s struck me in the last few years that part of the problem is an unwillingness of the consumers of these applications to embark upon the 6 months or so of work required to elucidate requirements and features and build up an internal solution. The easier option is to buy a package, take whatever features it has, and use it. It’s attractive because (while I’m convinced it’s more costly) it’s simpler to go through.
Part of the problem, frankly, is that the people who are affected by the software and who have to get productivity from the software have no voice. And then we wonder why people hate IT and believe IT to be a giant black hole that sucks in capital and generates no return at all.
There should not need to be a 6-month process to “define” a set of “features”. Here is a hint: any solution that takes 6 months to define almost certainly does too much at once. Frankly, any solution that takes 6 months to *build* without delivering results somewhere along the way probably does too much.
Predictably, the shape of this competitive landscape engenders bad code quality; the actions within vendors’ orgs are a function of the selection process. A former colleague related the following anecdote to me. After receiving yet another death-march deadline for a completely under-specified feature for their offering, he happened to pass by the CIO. My former colleague’s curt response to the usual pleasantries resulted in a frank exchange, wherein the CIO explained that whenever a competitor announced a new feature, they immediately announced the same feature. They found that very few customers actually used these new features, but they would lose some customers simply for not having them. Their philosophy was to put in the least amount of investment and get the feature out as quickly as possible; if customers began using it, then they would invest in improving it.
As Joel Spolsky has remarked, nothing sells more software than a new version. Alan Cooper once blogged about a washing machine (as I recall) that correctly and automatically selected all the appropriate settings for its load, but it didn’t sell because there weren’t enough knobs and lights – it was perceived to be deficient in features.
It’s a naive consumer mindset problem, at some level. Perhaps we should engage professional buyers.
Can TCO and extensibility be effectively marketed as “killer” features? What if a vendor leveraged their agility to give away a certain amount of customizations? FREE is a powerful motivator.
How does the Innovator’s Dilemma play out in this competitive landscape? Isn’t this where the cloud vendors come in? Werner Vogels recently tweeted that vendors continue to sell complexity – are we witnessing a tipping point?