Traits of Analytics Led Companies

Imagine you are the CEO of a retailer. The economy is roaring, people are shopping more at your higher-end shops, and margins are rising. Quarter-over-quarter top-line revenue growth is coming in, albeit at a slow, measured rate. You are even getting some margin growth by removing excess cost from the business. Things are going great!

Then, one year, everything changes. A new competitor emerges. Similar brand strength, similar cost structure, even similar locations. However, they seem to get twice as much revenue per square foot, based on some new traffic-pattern analysis they are doing. And they seem to have very savvy staff, empowered by their systems to recommend products that customers actually want to buy. This, combined with the investment they have made in omni-channel customer experience, is allowing the upstart to cut drastically into your market share.

You are paralyzed by fear. You’ve read a few articles about big data, but as a brick-and-mortar retailer focused on the high end, you did not think it was much of a priority. You’ve been focused on cost control and squeezing an extra 10 basis points of margin out of your existing model, and your competitor has leapfrogged you by creating an entirely new model that increases margin by 2%. In retail. Where such a margin increase can mean profits rise by 40%.

You used to be aware of analytics. Now you need to lead the charge to be analytics led, before your competitor beats you so badly that you become an acquisition target – with your declining brand and your real estate the only assets that remain. You are faced with a new imperative – how do I drive this organization to become analytics led, in a way that lets me “re-leapfrog” this new upstart competitor?

Becoming an analytics led company – a company that drives strategic advantage through analytics – is a journey that requires rethinking how your entire business operates. It requires the agility to change tactics and strategies in response to data about how your customers interact with your products, retail locations, and your brand in general. You will not get there overnight. There are, however, traits that such companies share – traits you can use as markers to know whether you are at least on the way to becoming such a company.

Contextual Intellectual Capital is Valued

For the purposes of this discussion, contextual intellectual capital is the sum of learning from analytics that has taken place and exists in the minds of people actively involved in shaping the business. For example, a tuned collaborative filtering model that forms the basis of a recommendation engine, paired with data scientists who know how to evolve the model, could be considered such capital. It is contextual because its value is derived from properties unique to the company – the brand, the people, the culture, the customer base. Even if you “copied the code” to a different company, its value would deteriorate, because it is optimized for that particular brand.

Analytics led companies have a great deal of contextual intellectual capital. It is the models, the learning, and the people who know how to leverage and improve the models. It is the ability to create new models in response to changing business conditions – models that build on what was learned from prior ones, and whose value derives not just from the math but from the context they come from.
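To make the collaborative filtering example concrete, here is a minimal sketch of the item-based variant behind many recommendation engines: score items by how similar their purchase patterns are. All customer and product names, and the purchase matrix itself, are invented for illustration.

```python
from math import sqrt

# Toy customer x product purchase matrix (1 = bought). Illustrative only.
purchases = {
    "alice": {"coat": 1, "scarf": 1, "boots": 1},
    "bob":   {"coat": 1, "scarf": 1},
    "carol": {"boots": 1, "gloves": 1},
    "dave":  {"coat": 1, "gloves": 1},
}

def item_vector(item):
    """Binary vector of which customers bought the item."""
    return [purchases[c].get(item, 0) for c in sorted(purchases)]

def cosine(a, b):
    """Cosine similarity between two purchase-pattern vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(item, catalog):
    """Suggest the item whose purchase pattern most resembles this one."""
    scores = {other: cosine(item_vector(item), item_vector(other))
              for other in catalog if other != item}
    return max(scores, key=scores.get)

catalog = ["coat", "scarf", "boots", "gloves"]
print(recommend("coat", catalog))  # scarf co-occurs most often with coat
```

The math is a few lines; the contextual capital is the tuned version of this idea, the real purchase data behind it, and the people who know how to evolve it.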

Driven by Data, But Intuition Still Matters

One of the more surprising things you find in analytics led companies is that, while they are naturally driven by data, as you would expect, they do not completely discount intuition. Intuition is a uniquely human capability for processing large amounts of complex information from diverse sources in parallel – something humans still do far better than computers.

What does this mean? A human has the context to intuitively know, based on data about whether a model has worked (or not), how the model may need to be tweaked. Intuition gives us a safe harbor for knowing when results from a new model should be called into question. For example, if a new model for managing up-sell recommendations is delivering results well beyond what it should, intuition tells us to look at the data more closely, so we can see whether some one-time, black-swan-style event is influencing the recommendations.

For example, during a cold snap, more people may buy face masks that cover the entire face while they are buying new winter coats – but that does not imply such masks should always be up-sold, as the condition perhaps only exists when the temperature drops below 10 degrees Celsius. A model running 100% unmanaged, if weather is not included in the analysis, would not pick up this detail; a model combined with intuition is much more powerful.
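The intuition-driven check described above can be sketched: splitting a suspiciously good up-sell result by temperature reveals the cold-snap confounder. The daily sales figures are made up for illustration.

```python
# Illustrative daily records: (temp_C, coats_sold, masks_sold). Made-up data.
days = [
    (15, 10, 1), (12, 11, 0), (14, 9, 1),   # mild days: masks rarely attach
    (8, 14, 9), (5, 16, 12), (9, 15, 10),   # cold snap: masks sell with coats
]

def mask_attach_rate(records):
    """Masks sold per coat sold across the given days."""
    coats = sum(c for _, c, _ in records)
    masks = sum(m for _, _, m in records)
    return masks / coats

overall = mask_attach_rate(days)
cold = mask_attach_rate([d for d in days if d[0] < 10])
mild = mask_attach_rate([d for d in days if d[0] >= 10])

# A model blind to weather sees one healthy overall attach rate; splitting
# by temperature shows the uplift exists only during the cold snap.
print(round(overall, 2), round(cold, 2), round(mild, 2))
```

The unmanaged model sees only `overall`; the human asks to split by weather and finds the effect is entirely concentrated in the cold days.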

Optimized for Learning

Analytics led companies usually think quite differently about the source of sustainable value. At first glance, a competitor may think it is the presence of a killer model that predicts what customers want to buy with incredible accuracy. However, as valuable as that is, in a competitive market, competitors will quickly figure out ways to reverse engineer and replicate the model. It is not a static model that really provides the value in an analytics led company; it is the organizational capability to learn from the model and quickly adjust it to changing conditions.

Imagine you are a retailer who has been selling products primarily in western economies for the last ten years. As you move into emerging markets, how does the system that evaluates product mix change? Organizations that lack the agility to update and evolve models risk using inappropriate models for new conditions and situations. Companies that are optimized for learning can quickly adjust to new realities and change models to meet new business conditions.

Powered By Science

Is there any type of science that isn’t driven by data? Analytics led companies understand that data science is really *business science*. Such science has a process, and that process is the scientific method. You form hypotheses, you test them, and if they fail, you learn and move on to the next hypothesis. Data science as a term may be a fad, but the scientific method, and its application to business, is most certainly not.

Part of science, of course, is embracing the failure of models, even if the ideas behind them come from a high-ranking executive. Proving that models don’t work, in science, is just as important as proving that they do. In analytics led companies, while intuition matters (see above), when a hypothesis is falsified under test, the outcome is accepted in favor of alternative hypotheses. The rank of the idea’s progenitor does not matter.

Practitioner Driven Tool Choice

In analytics led companies, the CIO does not buy analytics tools based on conversations that occur on a golf course. While tools are used, tool choice is vetted by the data science team.

Thankfully, most data scientists tend to be very pragmatic about tool choice. The tools of choice these days tend toward free or open source when possible – things like Hadoop, R, Python, and related libraries. Paid, proprietary tools have their place in certain situations, but the defaults tend to be tools that lower the cost of experimentation, so that too much capital does not get spent on as-yet unproven ideas. Nobody wants to invest seven figures in tooling for a model that may not work – at some point, too much investment in an unproven model will create pressure to “make it work”, even if it turns out to be wildly wrong.

Analytics Driven Strategy

The most important trait of analytics led companies – above all – is confidence that science properly applied to business has the potential to deliver breakthrough value. Their executives have not only seen it in competitors or upstarts encroaching on their turf; they are prepared to compete by executing an analytics strategy that plays to their own strengths.

Does this mean that brick and mortar retailers should all replicate the strategy of Amazon? Of course not. It means a contextual strategy that plays to the retailer’s strengths. If it is a retailer with strong location coverage in certain kinds of communities, the analytics strategy will consider that. If it is a retailer whose brand appeals to a different kind of consumer, it will consider that as well. They deeply understand that context matters – and that the unique combination of analytics models, people, corporate culture, and brand forms the basis of a successful analytics strategy.

The Analytics Maturity Spectrum

There is no doubt “Big Data” has taken the tech world by storm. I have spent much of 2013 talking about analytics and data science with people all around the US, going to conferences like Strata, and immersing myself in this world for the last 12 months. Over the course of this journey, I have started to notice some patterns about how various people in various kinds of organizations understand and invest in analytics.

The analytics led company is a concept I will define here as a company that seeks to use analytics (predictive, prescriptive, or descriptive) as one of its chief competitive weapons. The canonical example is Amazon, whose use of analytics is part of the DNA of the company. However, there are other, more traditional companies that are analytics led, such as Walmart, Procter & Gamble, Kohl’s, and dozens of others.

In companies that are analytics led, analytics capabilities are spread throughout the company. They are not siloed off to some group in IT that does “analytics stuff”. Such organizations, knowing that analytics has to be a core competency of the company, invest in people – data scientists, data engineers, data-savvy analysts and developers – and free them to use whatever tools and techniques are required to generate business results.

The next category in the continuum is analytics aware companies. These organizations see the competitive threat. Many may be piloting technologies or starting to do discovery work in small areas. They see the value, but have not yet integrated analytics into the DNA of the company and made it business as usual. These organizations often have a siloed group doing experimentation, and this group often has ties to, or is directly part of, the traditional IT department.

 


Further down the spectrum are analytics ignorant companies. These companies simply do not see the value of predictive analytics, and instead seek to gain competitive advantage through other means. Finally, at the other end of the spectrum, are analytics hostile companies. They may have sought to use analytics and failed – and then soured on the idea. They may have a technology-hostile culture in general. Regardless of the reason, they make very good targets for analytics led companies that seek to steal market share.

From Analytics Ignorant to Analytics Aware

Most industries, though not all, have seen the emergence of at least one new competitor who has used analytics to achieve some sort of competitive advantage. Whether it is an organization like Progressive Insurance, which uses analytics of how you drive via its Snapshot tool to allow for better underwriting, or one of the many online and offline retailers using analytics to understand or predict customer behavior, if you are the CEO in any industry where one of these upstarts has emerged, you have likely at least made your executive team aware of the threat.

That said, in industries that tend to be less competitive, due either to higher barriers to entry or the presence of a monopoly, the urgency for analytics is much lower. For these types of companies – utilities, some telecoms, and a few others – even if the potential for additional profit is there, the lack of urgent need tends to move analytics to the back burner. It is only when a competitive threat from a related industry emerges (e.g. Google cutting into Yellow Pages revenue) that such organizations move from ignorant to aware.

Moving From Analytics Aware to Analytics Led

In analytics led companies, the approach toward data science will generally be to build the capability in house. Leaders of such companies understand implicitly that analytics is deeply business relevant. They know that predicting customer behavior and anticipating customer needs – and, most importantly, connecting those insights to the rest of the business – drives profit margin, customer loyalty, and numerous other outcomes that are core to the mission.

Analytics aware companies, on the other hand, tend to know they need those outcomes, but do not have the capability to achieve them. They often try to acquire analytics by purchasing technology – usually applications that have some analytics capability. While this approach can help the company at least get level with its peers, it does not allow the company to move far beyond them: if one company can purchase a product that does analytics, so can its competitors. There may be a short-term advantage, but it isn’t sustainable.

Some analytics aware companies may seek to purchase the capability through acquisition – buying a company that is analytics led and hoping that the new business unit will enable the entire organization to become analytics led as well. While such moves have a better chance of providing competitive advantage than buying a product, they are risky: unless the acquisition is properly integrated (which seldom happens), this pattern tends to leave analytics capabilities siloed within the business unit that used to be the acquired company.

The Opportunity at “Analytics Aware”

Data science will obviously be more valued in organizations that are analytics led. However, the most interesting opportunities for change tend to be in organizations that are analytics aware. The analytics aware organization is the one where the value is understood, but a brand-new culture of leveraging data in new and interesting ways can still be fostered. In analytics led organizations, especially ones that have been analytics led for quite some time, certain conventional wisdom may already be in place about what is possible and what isn’t. Often, such “wisdom” constrains the idea-space, causing the most ambitious ideas to sound too big, audacious, or disruptive to be viable investments.

On the other hand, analytics aware companies have experience spending large amounts of money on products and acquisitions. Such costs tend to dwarf the cost of a competent data science team. One can take the budget spent on tools and acquisitions, redirect it toward an innovation lab that serves business line leaders, and get a far superior return on the investment.

What is the takeaway of all this? Do not despair if you are not analytics led… yet. Use it as an opportunity to redefine the kind of analytics your organization will use – an opportunity to chase more audacious ideas than people with an abundance of conventional wisdom would ever consider.

Hype, “Big Data”, and Towards a More Pragmatic Analytics

According to Industry Pundits™, IT departments worldwide will be spending an amount of money equal to the GDP of several midsized countries on Big Data. As someone who has been around the block a few times, I have seen many hype cycles come and go. But there is something truly staggering about the hype around Big Data these days. It is enough to make you question the whole thing – or at least wonder just how deep the inevitable trough of disillusionment is going to be!

Is It Big Data, or Big Hype?

There is, of course, a kernel of truth to the hype. Much of the most interesting work in tech in general – and the startup world in particular – is occurring in what can vaguely be called the “Big Data” space. Amazon, Facebook, and Google, three of the so-called “four horsemen of technology” (the fourth being Apple), drive their profitability via Big Data. As these companies lead, it isn’t just other startups following; corporate IT departments are suddenly looking at their long-running “Business Intelligence” initiatives and wondering why they are not seeing the same kinds of return on investment. They are thinking… if only we tweaked that “BI” initiative and somehow mixed in some “Big Data”, maybe *we* could become the next Amazon.

Sadly, many such initiatives are doomed to failure. Many companies will task IT with coming up with a big data initiative, but won’t really involve the business at all. Many of these cases will result in IT going out and buying a product, installing it on desktops (or perhaps even tablets), and subsequently declaring victory. Of course, the value from this activity is dubious at best, typically resulting in lots of licenses acquired that, ultimately, sit on a shelf and seldom get used.

That might be bad, but there are worse things that can happen. There will be others that will go forth and try to build a comprehensive platform. Because IT often works in isolation, without a specific business problem to work on, the urge is often to try to build a solution that a theoretical business person can use to do analytics in a generic sense. There are, of course, serious problems with this approach:

  • A tendency to spend years “perfecting the universal platform”.
  • A platform that, in an attempt to be usable by a general business user, dumbs things down and doesn’t deliver sufficient value.

Without a specific business problem to focus on, not only do you build more platform than you need, but you tend to build a platform that lacks the depth needed to solve the kinds of problems you solve with modern analytical tools. Let’s face it – most business users are not experts in how to use Monte Carlo simulation, neural networks, or other tools in the domain of the data scientist.
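As a taste of what such tools involve, here is a minimal Monte Carlo sketch: simulating quarterly profit under uncertain demand and unit margin, then reading off the expected value and the downside. The distributions and all numbers are purely illustrative.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Hypothetical inputs: uncertain demand and per-unit margin. Made-up numbers.
def simulate_quarter():
    demand = random.gauss(10_000, 1_500)   # units sold
    margin = random.gauss(4.0, 0.8)        # profit per unit, in dollars
    return max(demand, 0) * margin

# Run many trials, then summarize the resulting profit distribution.
trials = sorted(simulate_quarter() for _ in range(100_000))
expected = sum(trials) / len(trials)
p5 = trials[int(0.05 * len(trials))]       # 5th percentile = downside case

print(f"expected profit ≈ ${expected:,.0f}, 5th percentile ≈ ${p5:,.0f}")
```

The point of the exercise is that the simulation answers a risk question ("how bad could the quarter plausibly be?") that a single point estimate cannot; that framing is the data scientist's job, not the business user's.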

Want To Do Analytics? Start with a Business Problem!

It seems like rather trite advice, and I am loath to say “never” very often, but I can say with certainty that you should never build out a Big Data or analytics initiative without a specific business problem in mind. Sure, there are lots of people who would love to raid your corporate treasury to build the “one platform to rule them all” – or even just sell you a bunch of shelfware. But that doesn’t actually do what you are there to do, which is get results, not buy software.

Our advice at ThoughtWorks is, unambiguously, to start small. Every good analytics problem has an underlying analytics question – something like “given what we know about a customer, how likely are they to leave for a competitor?” or “given a set of transactions, what is the likelihood of fraud?”. We then look at what data sources – some conventional, some far from conventional – are available to help solve the problem. We do discovery work in the data, and proceed to work up a hypothesis about how we can use that data to predict something about the customer, transaction, or other subject of interest. Then we test our hypothesis. If the test works out, we move on and find a way to operationalize the finding for the benefit of the customer. If not, we seek the next hypothesis and try again.
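That loop can be sketched in a few lines: pose the churn question, encode a candidate hypothesis as a simple rule, and check it against history before investing further. The records and the five-month threshold are invented for illustration.

```python
# Toy customer records: (months_since_last_purchase, churned). Made-up data.
history = [(1, 0), (2, 0), (1, 0), (3, 0), (2, 1), (8, 1), (10, 1),
           (7, 1), (9, 0), (2, 0), (12, 1), (1, 0), (6, 1), (11, 1)]

# Hypothesis: customers inactive for more than 5 months are likely to churn.
def predict(months):
    return 1 if months > 5 else 0

hits = sum(predict(m) == churned for m, churned in history)
accuracy = hits / len(history)

# Compare against the dumbest baseline: always predict the majority class.
churners = sum(c for _, c in history)
baseline = max(churners, len(history) - churners) / len(history)

print(f"rule accuracy {accuracy:.2f} vs baseline {baseline:.2f}")
```

If the rule cannot beat the baseline, the hypothesis is falsified cheaply and you move to the next one; only a hypothesis that survives this kind of test earns further investment in operationalizing it.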

The key to success in any agile process is the feedback loop. The problem with BI and even many incarnations of more modern analytics initiatives is the distinct lack of feedback that occurs when you build a platform before you solve a problem. The reason we call this Agile Analytics is because we use these feedback loops – and the learning associated with them – to guide our efforts.

Data Science versus Data Voodoo

Of course, a feedback loop doesn’t guarantee results. The work – the science – has to be solid as well. The tools of yesterday, doing things like building data cubes to slice and dice data, might be good at telling you what happened. They do little, however, to tell you the meaning behind the data, or to predict what will happen next. For this, we bring in data scientists, often people with PhDs in mathematics, physics, or related fields, to develop predictive algorithms. The kind of people who developed things like modern spam filters, which predict an answer to the question “is this email likely to be spam?” using Bayesian classifiers.
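A minimal sketch of that Bayesian classifier idea: a tiny naive Bayes model with Laplace smoothing. The corpus here is toy-sized and invented; real filters train on millions of messages.

```python
from collections import Counter
from math import log

# Tiny labeled corpus; purely illustrative.
spam = ["win cash now", "free cash prize", "claim your prize now"]
ham  = ["meeting notes attached", "lunch tomorrow", "project status notes"]

def word_counts(msgs):
    return Counter(w for m in msgs for w in m.split())

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts):
    # Laplace (+1) smoothing so unseen words don't zero out the product.
    total = sum(counts.values())
    return sum(log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def is_spam(message):
    # Equal numbers of spam and ham examples, so priors cancel out.
    return log_likelihood(message, spam_counts) > log_likelihood(message, ham_counts)

print(is_spam("free cash"), is_spam("project meeting"))
```

The classifier simply asks which class makes the observed words more probable; the data science lies in choosing features, smoothing, and priors that hold up on real mail.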

The tools of the data scientist are vast indeed. Neural networks, machine learning, natural language processing, and more are among the techniques such people use when developing analytics solutions. More importantly, they know when to use these tools, when simpler tools will suffice, and when a pivot from one technique to another is needed if a hypothesis does not work out. It is the data scientist, not the tools themselves, that makes Agile Analytics possible. This is why we cringe when we see products advertised that claim to bypass the data scientist and allow users of all proficiencies to, say, apply neural networks to data sets. We would feel the same way about do-it-yourself surgery kits!

Scaling Based on Results

The pundits are right about the potential of analytics and big data. But they frequently understate the risk of wasting money by engaging in long-running programs that invest far too much before seeing results. ThoughtWorks took up Agile because we saw companies engage in massive waste by investing in projects for years before results were realized. Analytics is no different. In fact, given the amount of hype around analytics, it is even more prudent that you demand results before you scale up investment.

Our call to action is to start with discovery. We are happy to help with this, but even if we don’t, we believe in starting these initiatives with small teams of fewer than six people. Start by establishing an analytics question, engage in a short discovery phase – typically around three weeks – to form and test your hypothesis, and based on your results, go from there. When Google got started, they certainly did not begin with a seven-figure budget and some enterprise software package promising to “index the internet”. Neither should you, as you embark on your analytics initiative.

On (not) Being Post-Technical

At ThoughtWorks, one of the most pointed insults one can throw at you is that “so-and-so has gone post-technical”. This usually means one has entered the land of management – that place where you give back your brain in exchange for money (or prestige… or nothing, as it turns out).

Lately, as I have taken on additional roles at ThoughtWorks, the temptation to go “post-technical” has presented itself. Imagine not having to think anymore. Imagine being able to just focus on “strategy” and “people issues”, without all that hard technology stuff.

I could take that path, but I’d rather not…

I remain technical. My day-to-day job may involve things like Statements of Work and such, and I do not write code 100% of the time, but I am making a decision to at least remain somewhat involved in the technical communities in which I have interest.

Of course, my time is more limited, given I have taken on some management responsibility lately. So I can’t pursue everything. And I do have to admit that with my reduced time, I am likely never to be the most “technical person” in any given group of TWers. In fact, given that TWers are all generally really good at their jobs, it would be utter arrogance for me to assume I could keep up with them when I only act in a technical role for 40% of my time. So my bar isn’t TWers – they will most likely all be more technical than I am. But it is remaining competent enough to understand at least 40% of what they talk about, and more importantly, remain excited about technology.

So what am I excited about these days? Here is my own, personal “tech radar” (no fancy graphics required):

  • Functional Languages and Big Data

Someday, the world will catch up to where the F#/Clojure people are now. I think the tie between FP and Big Data is strong, and given that this is where much of the value in corporate IT spend will be created over the next five years, the use of FP will only increase.
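Why the tie is strong: Big Data frameworks parallelize work by composing pure, associative operations. A word count written as map and reduce steps – the shape Hadoop distributes across machines – illustrates it (in Python here, though the same shape is natural in F# or Clojure):

```python
from functools import reduce
from collections import Counter

# A word count expressed as pure map and reduce steps - the shape that
# distributed frameworks like Hadoop parallelize across machines.
lines = [
    "big data needs functional thinking",
    "functional code has no shared state",
    "no shared state means easy parallelism",
]

mapped = map(lambda line: Counter(line.split()), lines)   # per-line counts
totals = reduce(lambda a, b: a + b, mapped, Counter())    # merge (associative)

print(totals.most_common(2))
```

Because each map step is side-effect free and the merge is associative, the lines can be processed in any order on any number of machines – which is exactly the property functional languages make the default.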

  • IaaS (note, I don’t say cloud… too overloaded)

I got the cloud bug this year, and I will admit, I am a terminal case. I saw an app where someone deployed a load balancer with an HTTP request. And not an HTTP request that converts into an email to a tech who puts a physical one on a rack. Infrastructure as code is changing this business. Imagine the ability to specify, in human-readable code, an organization’s entire server configuration – and to realize that configuration by executing the DSL it is written in. That day is approaching, fast. Imagine the possibilities – for everything from devops to disaster recovery.
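The shape of that idea can be sketched as declared desired state plus an idempotent converge step. This toy is not any real tool’s API – just an illustration of the declare-then-converge pattern that infrastructure-as-code tools follow.

```python
# Desired infrastructure declared as data. Names and fields are invented.
desired = {
    "web-lb": {"type": "load_balancer", "port": 443},
    "web-01": {"type": "server", "size": "medium"},
    "web-02": {"type": "server", "size": "medium"},
}

def converge(current, spec):
    """Move current state toward the declared spec; return actions taken."""
    actions = []
    for name, conf in spec.items():
        if current.get(name) != conf:         # missing or drifted resource
            actions.append(f"create/update {name}")
            current[name] = conf
    for name in list(current):
        if name not in spec:                  # resource no longer declared
            actions.append(f"destroy {name}")
            del current[name]
    return actions

state = {}
print(converge(state, desired))   # first run provisions everything
print(converge(state, desired))   # second run is a no-op: state matches
```

The payoff for disaster recovery is visible even in the toy: rebuilding from scratch is the same operation as a routine run, just starting from an empty `state`.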

Why don’t I say cloud? The term is so overloaded that it is meaningless. Anything that runs on the internet is putting “cloud” in front of its name, which is why I have stopped using the term and do everything I can to use more descriptive terms.

  • Lean Startup

Though not a technology, it is the first thing from a project management standpoint I have been truly excited about in years – at least since Agile, maybe more so. Why? It finally closes the loop, bringing the entire scientific method into the business of software development. It seems so obvious now that it never ceases to surprise me it took our industry 50 years to adopt it. Lean Startup, to me, is the application of the scientific method to how you run a business. I recently did a webinar that nicely encapsulates how I feel about it in much more detail, but my general sense is that it builds on continuous delivery and gives organizations that embrace it a much more repeatable, sustainable path to successfully delivering business results with software.

The three things above are the tools I believe will create most of the value in corporate software over the next 5 years. And I fully intend to remain competent enough to discuss these topics with both technical and business audiences, even if I don’t write code every single day anymore. The trick, not being a full time software developer anymore, will be to:

  • Make sure I don’t stop doing some technical work – I still intend to pair program with the team when I can, as well as keep up with a selection of OSS projects I am involved with.
  • Continue to read new technical material – I am on planes a lot, so it should not be too hard. For example, I am learning Python now, despite not having a pressing reason to personally code in Python.
  • Be aware – ultra-aware – that I am not the most technical person in the room. My team-mates will likely know more than me, and I will defer to them on technical arguments more often than not, especially if they have decent data on why a given technical decision needs to be one way or another.

Will this work? We will see. I am only in the starting stages of this journey – will be interesting to see the degree to which my technical skills atrophy from not being a full time developer anymore!

Why I Work At ThoughtWorks (and why you should too…)

Alas, we enter a new year, a time when many people start thinking about opportunities. This was the place I found myself three years ago today, when I made the decision that I wanted to work for ThoughtWorks. Three years, four countries, and several really great clients later, I still feel as good about ThoughtWorks as the day I made that fateful decision. This post outlines why I am here, why I stay, and why you should consider a future here as well. I should say here, as always, that these views are mine, and do not necessarily represent those of the company.

Reason #1: Lack of an “adult/child” dynamic between sales and delivery that is common at many firms

In many companies, particularly consulting companies, salespeople call the shots, have most of the respect, and take home most of the rewards. Delivery – the “programmers” – are the people to be managed, paid as little as possible, and controlled by management. You see this in pay structures, you see it in who is listened to when it is time to try a new idea (think agile before it was popular), and you see it in who makes up the management of the company. Bluntly, in most consulting companies former salespeople make up most of the management – which helps contribute to this effect.

At ThoughtWorks, in my experience, the relationship between demand generation and supply is much more balanced. It isn’t always perfect – sometimes we may go too far to the delivery end of the spectrum here, to the chagrin of our demand generation people – but it is much better than the status quo most places, where salespeople are considered “the adults in the room”, with “resources” (aka delivery) to be managed. It has a much more egalitarian feel – which feeds into a greater sense of career satisfaction and engagement. It is not a mistake that this greater sense of engagement, frankly, is a driver of our growth.

Reason #2: We have a purpose

In most companies, consulting or no, the sole purpose of profit is to enrich shareholders. At ThoughtWorks, we have a greater purpose. When we say we want to change the world and make it a better place, we mean it. As a firm, we invest our energy, our time, and significant resources towards projects that serve the cause of social justice.

Why does this matter? I don’t know about you, but the idea that the profits that I help create for this company make the world a better place is a lot more compelling to me than the alternative, which is usually helping some rich dude buy a slightly larger yacht. It makes me more engaged in my work than I would otherwise be. And I can’t speak for everyone at ThoughtWorks, but this is true for scores of others as well.

Ironically, it is this greater engagement that increases our financial returns. Companies that lack this purpose driven engagement often have to use extrinsic means (usually cash) in order to try to achieve the same level of commitment. Because our people are more engaged, they are more likely to go above and beyond for our clients. This, in turn, drives greater financial success, greater technical innovation, and greater ability to do social impact work – starting the cycle anew. While ThoughtWorks isn’t Consultopia, a concept I described in my book about consulting, it is likely the closest company among large technology consulting companies you will find.

Reason #3 – We are truly transnational

For me, one of the chief reasons I joined ThoughtWorks was that I wanted to work for a truly global company that was global for the right reasons. There are many companies that seek to open offices in India or other offshore locales in order to do rate arbitrage. They will say they are opening offices in places like India to access the talent pool, but what they usually mean is that they need a means to do work at lower rates.

One of the things that impresses me about ThoughtWorks is that, as a strategy, when we open an offshore office, we make it a priority to find work in that market, with the goal of making that market self-sustaining. Two years ago, we opened our first Latin American office in Porto Alegre, Brazil. Within a year, we already had clients in the Brazilian market. Our efforts in China over the years have also yielded “in China, for China” type work.

There are several benefits to this. A primary benefit is that we gain better geographic diversity in our client base. This diversity allows us to better weather the natural ups and downs that will occur in any given region. As places like Brazil, India, and China rise, we already have relationships with those respective business communities. This does not happen if you are primarily in the rate arbitrage business, as companies that do that tend to tie offshore revenue to first world economies – spending most of their energy doubling down on first world business relationships, rather than developing world relationships.

Why is this good for you, as a potential ThoughtWorker? First, you get a chance to get involved in these emerging markets. Second, you have a better shot at getting exposed to these cultures – exposure that adds to your value as a consultant as you learn how to work with such a diverse set of people. Third, you work for an organization that is sufficiently diversified that the potential rise of Brazil, India, and China will only create more opportunities for you, rather than fewer.

Reason #4 – We won’t do evil

As a company, we have strong values. Every client we take on, we vet to make sure we are comfortable with what they do as a business. This isn’t always easy, and there are definitely shades of gray. But chances are, if a company derives most of its revenue from activities related to war, uses its profits to fuel hate against marginalized minorities, such as blacks, women, or the LGBT community, or otherwise does not have aligned values, we will not work for them.

This does not come without cost. There have been times when our pipeline was light, and work was offered that would have helped the company through a slow period. And frequently, there are very vibrant conversations about exactly where to draw the line when something is a “shade of gray”. I am reasonably certain we do not always get it right. But compared to most companies, which, being polite, operate on the principle of ATM (Anything For Money) – ThoughtWorks is head and shoulders above the rest.

Reason #5 – We often work on the hardest, most difficult problems

When we engage, the stakes are usually quite high for the company engaging us. This provides a sense of meaning and purpose to the work that is compelling, frequently very technically interesting, and almost always very interesting from a social point of view. After nearly three years at ThoughtWorks, it has become rather obvious to me that many problems that are presumably software problems really have a corporate strategy or political problem at their core. This kind of work, very common here, helps you as a ThoughtWorker exercise skills that a pure software shop can’t offer – things like building coalitions, selling crazy ideas, and creating delivery climates conducive to innovation. ThoughtWorks is not just a great place to learn technology, agile, lean, or continuous delivery – it is also a great place to learn how to navigate the ropes that exist in large organizations so you can actually get things done.

It isn’t for Everyone

Look, ThoughtWorks isn’t for everyone. Most roles require extensive travel. And we have a pretty hard-core vetting process for new hires. But I believe that the opportunity to do amazing work – work that is literally “make or break” type work for many of our clients – in a manner that truly makes the world a better place – is worth it. It is a place where you will have deep respect as a technical person. It is a place that bears the costs of doing the right thing. It is a place that you will be proud to work for. If you would like to join us, please let me know, either directly (aerickson – at – thoughtworks.com), or by going to our new site for potential candidates at join.thoughtworks.com.

The Link Between Continuous Delivery and Agile

Now that Agile has passed the 10-year mark, many people are starting to wonder what the next step should be in the evolution of Agile. As we start to think about what’s next, it doesn’t hurt to think for a moment about how we got here in the first place. As the saying goes, it’s hard to know where you’re going if you don’t know where you’ve been!

So let’s turn back the clock through the mists of time to the years leading up to the Agile Manifesto in 2001. Back when this movement started, many of the “reasons for Agile” just seemed very intuitive. People over process? Sure. Responding to change versus following a predefined plan? Common sense. On the surface, few people would disagree with the assertions made in the Agile Manifesto. But is that enough? Can you follow the Manifesto—do it all by the book—and be guaranteed to create software that delivers economic benefit? No. By itself, Agile doesn’t lead to business value!

How Can By-the-Book-Agile Fail?

Let’s be clear. With its increased collaboration with the customer, more frequent releases, and increased engineering and testing discipline, Agile makes delivering value more likely. It’s certainly a vast improvement over multi-year “too big to fail” Waterfall software projects. But even if you do everything right, even if you have the best practitioners in the world building your product, you can still create a product that fails to make money.

Agile—and here we mean the kind that includes the right engineering practices, such as test-driven development (TDD), pairing, SOLID principles, automated acceptance testing, and continuous integration—will ensure that you create a product that works…in the lab, at least. However, despite your best efforts, you might build a very functional eCommerce site that tries to sell something that nobody really wants. Or you might build an internal application that can’t go beyond the lab because the operations people can’t support it in production. These issues—and many other things outside direct control of the team practicing Agile—can thwart even your best efforts.

Article continues here

Hourly Rates Considered Harmful

So here you are, on a project, and everything is going great. You are delivering value, your customer is happy, and all is well and right with the world. You are doing Agile – including the parts like pairing and TDD that lots of people pay lip service to but far fewer actually do. You are even getting to use innovative technology like F#, Clojure, JRuby, Hadoop, NoSQL, Knockout.js, or one of dozens of technologies that somehow manage to enhance your productivity. Awesome!

One problem though. Your rates are too high. Can you help us train this group over here to be like you? They work for 1/3 your cost, and don’t ask us lots of hard questions all the time.

Sigh.

Of all the ways to sell the services of higher-end technology consultants – if I had to choose the worst way – I think I would end up with hourly rates. Yet this is the very measurement vendor management departments in most large companies use to “price” the services of said consultants. Unfortunately, it is a number that, in isolation, tells you very little of importance, for two main reasons:

  • Measurement of effort at the wrong level of precision
  • Measurement of cost irrespective of value delivered

Measurement of effort at the wrong level of precision

How many people are doing estimates on a per-hour basis? There may be a few, but I highly doubt anyone other than Gantt-chart-driven control freaks is looking to normalize at that level of detail. The reality is that, as a developer, I have a mix of useless hours and a few really good ones where the vast majority of the value gets delivered. The whole hours thing is a vestige of the temporary employment industry, where programming was considered a fancier form of typing. We need to recognize it for the vestigial tail it is.

What should replace it? I am a big fan of weekly or monthly rates when working under a time and materials contract. I don’t mind the idea of “per person-sprint” either – which fits neatly with the idea that we would rarely fund only a percentage of a given sprint or iteration. I am open to other models that match funding to the unit of work, but do so in a way you can know in advance (i.e., I would reject per-story pricing on the basis that you can only estimate effort – and it invites too many fights or negotiations about how big a story ended up being).

Measurement of cost irrespective of value delivered

The precision issue, however, isn’t the biggest issue. The biggest issue I have with hourly rates is that they are the denominator in a much more important number: the ratio of value delivered to cost. This issue is an old one, well covered in Joel Spolsky’s post on the subject. The specific problem for higher-end consulting firms, however, is that their rates tend to be higher. Without considering value delivered as part of the equation, rate conversations tend to be self-defeating: they highlight the cost without ever talking about the value.

The problem, of course, is that value delivered is often a squishy thing to measure, while you can read the cost per hour right off the invoice. To me, this is the biggest reason why Continuous Delivery is important, particularly the kind that has a feedback loop that measures financial performance – it allows us to actually measure the numerator in the value-delivered equation. In the best of all worlds, we move to a funding model that funds the Minimum Viable Product up front, but then moves to an Evidence-Based Funding model (aka “Continuous Funding”) based on value delivered via Continuous Delivery.
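To make the ratio argument concrete, here is a back-of-the-envelope sketch. All rates, hours, and value figures below are invented purely for illustration – the point is only that cost per hour, read off the invoice in isolation, can point you at the worse deal:

```python
# Illustrative only: every number here is invented to show why cost
# per hour alone is misleading.

def value_to_cost_ratio(value_delivered, total_cost):
    """Dollars of value produced per dollar of cost."""
    return value_delivered / total_cost

# A "cheap" team: $50/hr, 4 people, ~1000 hours each, modest outcome.
cheap_cost = 50 * 4 * 1000        # $200,000
cheap_value = 300_000

# An "expensive" team: $150/hr, 4 people, ~500 hours each, strong outcome.
expensive_cost = 150 * 4 * 500    # $300,000
expensive_value = 1_200_000

print(value_to_cost_ratio(cheap_value, cheap_cost))          # 1.5
print(value_to_cost_ratio(expensive_value, expensive_cost))  # 4.0
```

On hourly rate alone, the second team looks three times as expensive; on the ratio that actually matters, it returns more than twice as much value per dollar – which is exactly the number an Evidence-Based Funding model would track.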

Something Has To Be Done

The funding model is broken. It has been for a long time, leading to sub-optimal results. Competition by rate, irrespective of value delivered, is a race to the bottom, leading to all sorts of bad outcomes for companies that get involved. Sadly, too many do. Continuous Delivery is a start; if we can get to Evidence-Based Funding, perhaps we can actually sustain continuous delivery.

Everybody’s Doing Agile–Why Can’t We?

Have you gone Agile? What are you doing this year to become Agile? We must become Agile in the next three months! These days, it is not unusual to hear about executives wanting to do an “Agile Transformation” on the entire company. Who knew that a bunch of relatively obscure techies would create a movement that is now the lingua franca of executives who are attempting to turn around companies! Agile really has “come a long way, baby.”

It’s amazing how things change in ten years. Once considered a methodology preferred by software developers because it “helped them avoid having to do status reports,” Agile has now gone beyond its original remit as a software development method. The word Agile, in many places, has become nearly synonymous with the word Good. As flattering as this must be to the original founders, it probably means it would be wise to think, very candidly, about whether Agile—as in Agile Software Development, or a broader Agile Enterprise—is really something that your company can achieve.

Read the rest of this article at InformIT here.

Welcome to the Revenge of The Nerds Economy

Many of you will remember Revenge of The Nerds, that fine classic movie where a bunch of, well, nerds take over the campus of Adams College by outsmarting and outwitting the jocks. For people in computing of a certain age and disposition – say, a late-30s geek from a western culture like myself who might have fit the nerd stereotype at various points in his early upbringing – the movie was somewhat influential. It told a story that, translated into economist-speak, equates to “intellectual capital may someday trump other kinds in terms of economic value creation.”

As it seems to have turned out, we are now in the Revenge of The Nerds economy. Despite 9%+ unemployment in the general economy, overall unemployment for software engineers is a tad below 5%. But this data, which is compelling enough, does not tell the entire story.

The bigger story is around what kinds of things are seeing investment. There is talk of another bubble right now in technical startups. If you are a technology based startup that has reached “Ramen Profitability”, you will almost certainly attract capital. Things are now even getting to the point where we are seeing companies that lack profits going to the IPO market (think Pandora and Groupon) – something that has been out of vogue since at least 2001. One could make the case that we will soon cross the line where it is easier to get a startup funded than it is to get a jumbo mortgage.

If you work in tech, there is a good chance you are feeling this, at least if you are currently employed and working for an employer that has some level of visibility. If you work for Google, Facebook, or some other high-tech company, you likely do not have to work that hard to get a new job offer. Notwithstanding the cruel irony that people who are unemployed are often systematically discriminated against, the job market for really good, currently employed software developers is as robust as it has been in 10 years.

So if the nerds are mostly all right, what about the jocks? What occupations did they end up in? While some may have made it to the NFL as professional football players, most ended up spilling into the general job market. The stereotype is that jocks end up in menial jobs, construction, or manufacturing; hard research being scarce, my own experience of knowing several such folks points instead to careers in sales, low-end finance (think mortgages), real estate, and personal training. Granted, a sample size of a couple dozen doesn’t really prove anything. But if one did extrapolate, one could conjecture that while “jocks”, for lack of a better word, do not do worse than average now, the nerd/jock investment and employment dynamic has changed.

Why the change? Now that simply taking big leveraged risks with a big pile of money isn’t in fashion (i.e., as it was before the global financial crisis), you need advantages from superior intellectual capital in order to sustain a higher-than-mean return on investment. Why not in fashion? From 2003-2008, it was accepted that financial engineering was the way to riches. If you only structured your collateralized debt obligation in the correct way, you could invest at 40x leverage and still retain a AAA rating on your debt. Financial engineering gave you higher profits under such a regime than traditional “engineering engineering” could provide. So money flowed into CDO structures and away from the nerdy parts of the economy that invent things.

So now we find ourselves in an economy where capital flocks to things like Pandora, Groupon, Facebook, Linkedin, and other things that, at their core, have interesting algorithms inside them. And we have a situation where if you are a developer capable of writing an interesting and valuable algorithm – or helping a company scale out a system that leverages one of these inventions, you are in high demand.

Good thing for the nerds. What this means for everyone else remains to be seen.

Velocity 101: Get the Engineering Practices Right

If one could equate faster typing with velocity, engineering practices perhaps would not matter in the world of software development productivity. Thankfully, there are reasons most organizations do not use Words Per Minute when evaluating new software developers. Slamming out low-quality code and claiming progress, be it story points or merely finished tasks on a Gantt chart, is a fast track to creating boat anchors that hold companies back rather than push them forward.

Without proper engineering practices, you will not see the benefits of agile software development. It comes down to basic engineering principles. A highly coupled system – be it software, mechanical, or otherwise – provides more vectors over which change in one part of the system can ripple into other parts. This is desirable to a degree – you need your transmission to couple to the engine in order for the engine to make a car move. But the more coupling there is beyond the minimum needed to make the system work, the more the overall system destabilizes. If the braking system suddenly activated because a Justin Bieber CD was inserted into the car stereo, you would probably see that as a pretty bad defect. And not just because of the horrible “music” coming out of the speakers.
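The same ripple effect shows up in code whenever a class constructs its own collaborators instead of having them passed in. Here is a minimal sketch of the difference (the class names are invented for illustration):

```python
# Hypothetical example: names are invented to illustrate "minimum
# necessary coupling" via dependency injection.

class SmtpMailer:
    def send(self, to, body):
        print(f"sending to {to}: {body}")

# Tightly coupled: the processor constructs its own mailer, so it can
# only ever be tested (or changed) together with SmtpMailer.
class CoupledOrderProcessor:
    def process(self, order):
        SmtpMailer().send(order["email"], "thanks!")

# Loosely coupled: the mailer is injected; any object with a send()
# method will do, including a test double.
class OrderProcessor:
    def __init__(self, mailer):
        self.mailer = mailer

    def process(self, order):
        self.mailer.send(order["email"], "thanks!")

class FakeMailer:
    """A test double that records what would have been sent."""
    def __init__(self):
        self.sent = []

    def send(self, to, body):
        self.sent.append((to, body))

mailer = FakeMailer()
OrderProcessor(mailer).process({"email": "a@example.com"})
print(mailer.sent)  # [('a@example.com', 'thanks!')]
```

A change to how mail is sent now touches one class, not every class that happens to send mail – the software equivalent of keeping the brakes out of the stereo.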

So what are the specific engineering practices? Some are lower-level coding practices; others are higher-level architectural concerns. Most rely on automation in some respect to guard against complacency. While I am generally loath to use the term “best practices” for fear that someone might take these practices and apply them in a cargo-cult manner, these are some general practices that seem to apply across a broad section of the software development world:

Test Driven Development and SOLID

While detractors remain, it has ceased to be controversial to suggest that the practices that emerged out of the Extreme Programming movement of the early 2000s are helpful.  Test driven development as a design technique selects for creating decoupled classes.  It is certainly possible to use TDD to drive yourself to a highly-coupled mess, given enough work and abuse of mocking frameworks.  However, anyone with any sensitivity to pain will quickly realize that having dozens of dependencies in a “God” class makes you slow, makes you work harder to add new functionality, and generally makes your tests brittle and worthless.

To move away from this pain, you write smaller, testable classes that have fewer dependencies. By reducing dependencies, you reduce coupling. When you reduce coupling, you create more stable systems that are more amenable to change – notwithstanding the other benefits you get from good test coverage. Even if you only used TDD for its design benefits – and never ran the tests after initially writing them – you would get better, less coupled designs, which leads to greater velocity when you need to make changes. TDD doesn’t just help you in the future. It helps you move faster now.
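A minimal red-green sketch of the cycle, using Python's standard `unittest` (the `Discount` example is invented for illustration – in real TDD the tests below would be written first and fail before the class exists):

```python
# Test-first sketch: the tests drive a small, dependency-free class
# into existence. The Discount example is hypothetical.
import unittest

class Discount:
    """Written only after the tests below were failing."""
    def __init__(self, percent):
        self.percent = percent

    def apply(self, price):
        return round(price * (1 - self.percent / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(Discount(10).apply(200.0), 180.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(Discount(0).apply(99.99), 99.99)

if __name__ == "__main__":
    runner = unittest.TextTestRunner()
    runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest))
```

Notice what the test pressure buys you: `Discount` needs no mocks and no setup, because anything that would have been hard to test (a database, a pricing service) never got coupled in.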

TDD is just one step on the way to keeping your code clean. Robert Martin treats the subject in much more depth in his book, Clean Code, calling on all of us to be professionals – to keep our standards and not give in to the temptation to write code that meets external acceptance criteria at the cost of internal quality. While you can, in theory, slap some code together that meets surface criteria, it is false economy to assume that bad code today will have a positive effect on velocity beyond the current iteration.

Of course, having good test coverage… particularly good automated integration, functional, performance, and acceptance tests, has a wonderful side effect of forming a robust means of regression testing your system on a constant basis.  While it has been years since I have worked on systems that lacked decent coverage, from time to time I consult for companies that want to “move to agile”.  Almost invariably, when I do this, I find situations where a captive IT department is proposing that an entire team spend 6 months to introduce 6 simple features into a system.  I see organizations that have QA departments that have to spend another 6 months manually regression testing.  TDD is a good start, but these other types of automated testing are needed as well to keep the velocity improvements going – both during and after the initial build.

Simple, but not too simple, application architecture (just enough to do the job)

While SOLID and TDD (or BDD and some of the ongoing improvements) are important, it is also important to emphasize simplicity specifically as a virtue. That is not to say that SOLID and TDD can’t lead to simplicity – they certainly can, especially in the hands of an experienced practitioner. But without a conscious effort to keep things simple (aka the KISS principle – keep it simple, stupid), excess complexity creeps in regardless of development technique.

There are natural reasons for this. One is the wanna-be architect effect. Many organizations have a career path where, to advance, a developer needs to reach the title of architect – often a role where, at least as perceived, you get to select design patterns and ESBs without having to get your hands dirty writing code. There are developers who believe that, in order to be seen as an architect, you need to use as many GoF patterns as possible, ideally all in the same project. It is on projects like these that you eventually see the “factory factories” that Joel Spolsky lampooned in his seminal Architecture Astronaut essay. Long story short, don’t let an aspiring architecture astronaut introduce more layers than you need!

It doesn’t take a wanna-be architecture astronaut to create a Rube Goldbergesque nightmare. Sometimes, unchecked assumptions about non-functional requirements lead a team to create a more complex solution than is actually needed. It could be anything from “Sarbanes-Oxley Auditors Gone Wild” (now there is a blog post of its own!) pushing an aggressive interpretation of the law that demands layers you don’t really need, to being asked for five nines of reliability when you only really need three. These kinds of excesses show up all the time in enterprise software development, especially when they come from non-technical sources.

The point is this – enterprise software frequently introduces non-functional requirements in something of a cargo-cult manner “just to be safe”, and as a result, multiplies the cost of software delivery by 10.  If you have a layer being introduced as the result of a non-functional requirement, consider challenging it to make sure it is really a requirement.  Sometimes it will be, but you would be surprised how often it isn’t.

Automated Builds, Continuous Integration

If creating a developer setup requires 300 pages of documentation, manual setup, and other wizardry to get right, you are likely to move much slower. Even if you have unit tests, automated regression tests, and other practices, lacking an automated way to build the app frequently results in “Works on My Machine” syndrome. Once you have a lot of setup variation – which is what you get when setup is manual – defect resolution goes from a straightforward process to something like this:

  1. Defect Logged by QA
  2. Developer has to manually re-create defect, spends 2 hours trying, unable to do so
  3. Developer closes defect as “unable to reproduce”
  4. QA calls developer over, reproduces
  5. Argument ensues about QA not being able to setup the environment correctly
  6. Developer complaining “works on my machine”
  7. 2 hour meeting to resolve dispute
  8. Developer has to end up diagnosing the configuration issue
  9. Developer realizes that DEVWKSTATION42 is not equivalent to LOCALHOST for everyone in the company
  10. Developer walks away in shame, one day later

Indeed, having builds be automated, regular, and integrated continuously can help avoid wasting a day or five every time a defect is logged.  It should not be controversial to say that this practice increases velocity.
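The heart of the fix is a single scripted entry point that every developer and the CI server run the exact same way, so there is no room for machine-to-machine drift. A toy sketch of the idea (the step names and commands here are invented placeholders; a real build would invoke your actual compiler and test runner):

```python
# Toy build-pipeline sketch: one command runs every step in order and
# fails fast, identically on every machine. Steps are placeholders.
import subprocess
import sys

BUILD_STEPS = [
    ("compile",   [sys.executable, "-c", "print('compiled')"]),
    ("unit test", [sys.executable, "-c", "print('tests passed')"]),
]

def run_build(steps):
    """Run each step; stop and report failure on the first nonzero exit."""
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"BUILD FAILED at step: {name}")
            return False
        print(f"{name}: ok")
    return True

if __name__ == "__main__":
    ok = run_build(BUILD_STEPS)
    print("build", "succeeded" if ok else "failed")
```

Whether the script is Python, Rake, Gradle, or plain shell matters far less than the property it enforces: the environment QA logs a defect against and the one the developer reproduces it on were built by the same automated path.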

Design of Software Matters

Design isn’t dead.  Good software design can help lead to good velocity.  Getting the balance wrong – too simple, too complex, cripples velocity.  Technical practices matter.  Future articles in this velocity series will focus on some of the more people related ways to increase velocity, and they are certainly important.  But with backward engineering practices, none of the things you do on the people front will really work.
