The Unheralded Benefits of the F# Programming Language

As many long-time readers know, I am an enthusiast of the F# programming language.  I make no apologies for my belief that, if you are developing software on the .NET platform, F# is one of the better choices you can make, for numerous reasons.  It is one of the reasons I proudly contributed as a co-author to the book Professional F# 2.0, which is being published by Wrox in October.

The oft-cited benefits of F#, to distill them quickly, are that it excels at intensely mathematical operations, it is built for parallelism, and it is well suited to defining domain-specific languages.  Those benefits are cited so often on the F# speaker circuit that they seem almost cliché to me at this point (note: yours truly is proud to call himself a member of said circuit, and often gives this very talk!).  As great as those features are, a couple of others, in my more mundane F# experience, stand out as the things that “save my ass,” for lack of a better phrase, more often than not.

Advantage 1: Near Eradication of the Evil NullReferenceException

The first feature I am most dutifully grateful for is that when working with F#, you almost never deal with the concept of null.  A side effect of being “immutable by default” is that the pattern of “let’s leave this thing uninitialized until I use it later, then forget I didn’t initialize it when I try to use it” mostly goes away.  In other words, immutable code almost never has a reason to be null; therefore, if you stick to immutable structures, there are no null pointer exceptions.
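A minimal sketch of what “immutable by default” means in practice (the names here are purely illustrative):

```fsharp
let x = 42            // an ordinary binding: immutable, and initialized at its point of definition
// x <- 43            // compile error: x is not mutable

let mutable y = 0     // mutability must be opted into explicitly
y <- y + 1            // only then is assignment allowed
```

Because every `let` binding must be given a value when it is introduced, the “declare now, initialize later” window in which null sneaks in simply does not exist.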

How often do you see this pattern in a legacy code base:

if (thisThing != null && thisThing.AndThatThing != null)

… or worse, deeply nested variations on the same theme?  When I am doing code archaeology, sometimes even on more recent code bases, I usually spot this kind of code in places where:

a.) Some very careful programmer is doing null checks to make sure she does not get a null pointer exception

… or, more commonly…

b.) Someone fixed one or more NullReferenceExceptions they were getting.

The only time you routinely deal with the concept of null in F#, typically, is when doing interop work, likely with someone else’s ill-initialized C# code.  Of course, one may wonder how one represents the idea of something actually being “missing” — that is, something roughly analogous to null — in F# code.  Well, that is where the option type comes to the rescue.  If, for example, you have a concept of weather, you might have this:

type Weather =
    { Skies : Sky
      Precip : Precipitation option } // Precip : Option<Precipitation> will also work

In this example, weather will always have a sky, but only might have some precipitation.  If something is optional, you say so, which means the value of Precip can be either Some(somePrecipValue) or None.  For C# programmers, this is roughly analogous to Nullable&lt;T&gt;, except that it applies to all types, not just value types.  This forces the programmer to state explicitly which values can be “absent” — absence becomes the exception, not the default.  In the same way that a database design becomes more robust when you make more of your fields non-nullable, software becomes more robust and less prone to bugs when fewer things are “optional” as well.
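To see how the compiler makes you deal with absence, here is a sketch (the Sky and Precipitation definitions are hypothetical) of pattern matching over Precip.  You cannot get at the value inside an option without saying what happens in the None case:

```fsharp
type Sky = Clear | Overcast
type Precipitation = Rain | Snow

type Weather =
    { Skies : Sky
      Precip : Precipitation option }

// The match must cover Some and None; omitting a case is a compiler warning
let describe weather =
    match weather.Precip with
    | Some Rain -> "bring an umbrella"
    | Some Snow -> "bring a shovel"
    | None      -> "no precipitation today"

let today = { Skies = Overcast; Precip = Some Rain }
// describe today evaluates to "bring an umbrella"
```

Contrast this with null in C#, where nothing forces the caller to check before dereferencing — the compiler simply trusts you, and the NullReferenceException arrives at runtime.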

Advantage 2: Your Entire Domain in One Page of Code

The second advantage, at least in my mind, is that unlike in C# and Java, the lack of syntax noise in F# means nobody uses the “one class per file” rule that is conventional in most mainstream programming languages.  The nice thing about this is that, frequently, you can fit a reasonably complex domain model, in its entirety, on one printed page of code.

One pattern I use quite a bit is to put the set of relationships between types in one file, along with the common functions that operate on them.  If I need to extend the domain in any way — such as later adding a conversion function from a domain type to a viewmodel — I put that in a separate file, where I can write extension members that adapt the domain to whatever I need it to be adapted to.
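As a sketch of that layout (the Order domain here is hypothetical), the core types and their common functions live together in one file, and the viewmodel adapter lives in another file as an extension member:

```fsharp
// Domain.fs – the whole domain and its common functions in one place
type Order = { Id : int; Total : decimal }

module Order =
    // a common function that belongs with the domain itself
    let applyDiscount pct order =
        { order with Total = order.Total * (1m - pct) }

// OrderViewModels.fs – a separate file adapting the domain to presentation
type OrderViewModel = { DisplayTotal : string }

type Order with
    member this.ToViewModel() =
        { DisplayTotal = sprintf "%M" this.Total }
```

Because records are immutable, the extension needs nothing but the public shape of Order — there is no hidden state it would have to reach into.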

Extension methods?  What if I need to use a private member to extend the domain?  That raises a question: when do functional programmers use private members at all?  I can only vouch for myself, but I seldom feel the need to hide anything in F# programs.  Think about why we ever need encapsulation: it is usually to stop outsiders from changing the member variables inside our class.  If nothing varies — as is the case in a system built from immutable constructs — there is far less need for such encapsulation.  I may have a private member somewhere to hide an implementation detail, but even that tends not to be something I would need in an extension scenario (i.e., projecting a domain object to a viewmodel).

The overall advantage of the way F# is written is that, in most systems, you can have all of your related concerns on a single page of code.  This ability to “print out and study the domain in the loo” is a subtle but important reason why F# is good for expressing domains.

Is it for Everything?

No — not by a long shot.  But more and more, I see F# as useful for many things beyond the traditional math, science, and obviously parallel applications that functional languages are usually considered for.  Specifically, the more I use it in MVC- and REST-style applications, the more it grows on me.  Especially when I am working with Java or C# code, and fixing someone else’s NullReferenceExceptions!

On Business Intelligence and F#

It is high time that Business Intelligence got the benefits of the language “Cambrian explosion” and the agile revolution.  Think about BI for a second.  Most of the talk around BI is oriented around tools: a stack that ties together presentation, storage, and logic, all in the name of avoiding pesky programmers — to the point that “requires no writing code” becomes a feature point.  How did we get here?  And how do we get out?

In the olden days, leaving BI aside for the moment, if you wanted a report you had to ask your IT department for it.  You were on their schedule, and more often than not the backlog was very long.  Leaving aside why this was the case (i.e. budget shortages, lack of IT/business alignment, etc.) — it was.  This begat two primary developments: the shadow IT department, and the market for tools that empowered the business to at least try to generate their own reports independent of IT.

Now, in the intervening years, these two developments have not stopped at all.  There is still a ton of shadow IT, and a ton of products that purport to deliver business intelligence through an integrated stack that, in theory, allows BI to happen without programmers.  The question is: is this a good thing?

I would say no.  BI tools, more often than not, tie you not only to a platform but, frequently, to a specific product.  You can’t take BI developed in MicroStrategy and run it in Cognos, at least not very easily.  And it makes sense why: each of these tools competes on the basis of capabilities, so there is no motivation to port the capabilities of one BI product over to another.  And because there is no obvious short-term economic justification from the tool vendor’s point of view, it simply doesn’t happen.

Of course, the medium- to long-term economic justification for tool vendors is very good indeed.  By creating an ecosystem of BI that allows for greater innovation and better solutions, BI will receive much greater investment.  The savvy players who take advantage of this will do really, really well, just as Microsoft prospered by having an open PC platform and Google prospered by having an open internet platform.  Someone has to move first, however, and given the nature of the space — big corporate buyers — it has to be one of the big players in order to have any kind of credibility.

That said, it does not help that there has been little standards innovation in the world of SQL.  Not to say that it doesn’t happen, but let’s put it this way: nobody is proposing SQL as a new .NET language the way they do for F#, Ruby, or even Boo.  SQL is only now standardizing how objects work… and worse yet, the language continues to be balkanized — especially in BI land, where extensions for doing cubes and other specialized functionality tend to differ from vendor to vendor.

So how do we untie this Gordian knot and get to a place where BI is portable, testable, and open to a diversity of authoring tools, persistence mechanisms, and presentation mechanisms?  I humbly submit that F# should be the language of BI.

Why F#?  Well, functional programming in general is oriented toward the folding, summarization, reduction, and calculation of sets of information — that is, data.  SQL is mostly a functional, declarative language anyway, so moving to F# as the lingua franca of data should be a no-brainer.  Imagine a world where BI is:

* Persistence Ignorant rather than Persistence Obsessed

* Portable from tool to tool – so long as it can parse F#

* Authored by business users through GUI tools that emit F# constructs rather than balkanized SQL constructs

* Able to draw on the benefits of a modern functional language (ASTs, automatic generalization, massive parallelization, etc.), finally making those tools easily available to BI

* Allowed to have the benefits the agile world has brought us (testability, etc.)
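As a taste of what SQL-style summarization looks like in plain F# (the sales data here is made up for illustration), a GROUP BY with a SUM becomes an ordinary pipeline of folds:

```fsharp
// hypothetical (region, amount) rows
let sales = [ "East", 100.0; "West", 250.0; "East", 75.0 ]

// SELECT region, SUM(amount) FROM sales GROUP BY region — as a pipeline
let totalsByRegion =
    sales
    |> List.groupBy fst
    |> List.map (fun (region, rows) -> region, rows |> List.sumBy snd)

// totalsByRegion = [("East", 175.0); ("West", 250.0)]
```

The same pipeline runs unchanged whether the rows come from a flat file, a database, or an in-memory test fixture — which is exactly the persistence ignorance and testability the list above is asking for.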

Imagine a world where you write a domain-specific language (DSL) in F#, and the BI tools manipulate the DSL.  Imagine being able to swap out different persistence mechanisms based strictly on performance characteristics, rather than having to pay the porting tax every time you move from one persistence mechanism to another.  Few people in the BI world have been exposed to the recent “Cambrian explosion” of new languages that have emerged in the last few years, and that’s a shame, because some cross-pollination would be very compelling for new kinds of solutions to emerge.

A recent Gartner CIO poll reported that CIOs must “Make the Difference” by replacing generic IT with distinctive solutions that drive enterprise strategy.  This means that BI that truly differentiates will likely be invested in.  It would be a shame if all this BI continued to live on vendor-specific islands, unable to leverage the state-of-the-art work going on in computer science.  BI that leverages the new capabilities that computer scientists like Don Syme are giving us, on the other hand, will have a great chance to “make the difference.”

I conclude with a call to action.  If you are doing BI, ask why we are using the same basic language we were using ten years ago.  If you are a language geek or a software developer, ask why what you are doing — particularly if it generates information used in the strategic decision-making process — isn’t considered “BI.”  Whoever is the first tool vendor to realize this vision will probably get a great deal of control over how it gets done — and the field is very green at the moment for someone to fill this gap :)