Why Learn Haskell?
If we sincerely ask "why learn Haskell?", then we wind up learning Haskell!
Asking a "why?" question means we want to be told a reason. But would we accept any reason? It seems not, for if any answer suffices, then the answer is irrelevant, so why bother asking?
Thus the answer to our question should be a good reason. But what is a good reason?
The gold standard is a logical reason. Namely, the answer should start with acceptable premises, apply general rules to them, and reach the conclusion "learn Haskell".
What is logic? Let’s take a classic example of a logical argument:

All men are mortal.

Socrates is a man.

Therefore, Socrates is mortal.
The first two are premises that we take for granted. But there appears to be something else we take for granted. A magic "therefore" somehow produces the conclusion.
This bears further examination. The similar sounding "therefore, all mortals are Socrates" feels incorrect. There must be a hidden rule dictating what we may deduce from the two premises.
It seems we implicitly apply the following rule: suppose Y implies Z; then if X implies Y, we deduce X implies Z. Let’s call this rule dot and write it as follows:
dot :: (y -> z) -> ((x -> y) -> (x -> z))
The (->) symbol means "implies", and by convention, it associates to the right, that is, a -> b -> c means a -> (b -> c). Accordingly, we omit some parentheses:
dot :: (y -> z) -> (x -> y) -> x -> z
We represent the statement "all men are mortal" with:
p :: Man -> Mortal
And the statement "Socrates is a man" with:
q :: Socrates -> Man
Then we can write:
dot p q :: Socrates -> Mortal
to indicate we applied dot to the two premises to deduce the conclusion.
Our notation is vital for determining if we are reasoning logically or not. Instead of hiding potentially problematic details, we explicitly point out the logical rule we’re using.
In other words, we should agree on the rules of a game before playing, and during play, we should be clear which rules allow our moves.
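The syllogism can in fact be written as a Haskell source file; if GHC accepts the definitions, the deduction is valid. This is a minimal sketch: the one-constructor datatypes are illustrative stand-ins for the propositions, and dot is given a term-level definition (ordinary function composition) that the article has so far only described by its type.

```haskell
-- One-constructor types standing in for the propositions.
data Socrates = Socrates
data Man = Man
data Mortal = Mortal

-- Premise p: all men are mortal.
p :: Man -> Mortal
p Man = Mortal

-- Premise q: Socrates is a man.
q :: Socrates -> Man
q Socrates = Man

-- The dot rule, defined as function composition.
dot :: (y -> z) -> (x -> y) -> x -> z
dot f g x = f (g x)

-- The conclusion: Socrates is mortal. Type-checking this line is the deduction.
conclusion :: Socrates -> Mortal
conclusion = dot p q
```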
Apply Now
We ought to state more precisely what application means. Suppose f and g establish that:
f :: x -> y
g :: x
Then we may apply f to g to deduce that:
f g :: y
Here, x and y stand for arbitrary premises, such as Man -> Mortal, or ((Socrates -> a -> Plato -> b) -> (Plato -> c)) -> (d -> Socrates). In our example above, we’re taking x, y, z to be Socrates, Man, Mortal so the dot rule applies.
By convention, application associates to the left, so instead of (f g) h we may write f g h. Thus dot p q is actually a two-step process. Setting y and z to Man and Mortal is enough to get dot p:
dot p :: (x -> Man) -> x -> Mortal
Then setting x to Socrates gets us to (dot p) q:
dot p q :: Socrates -> Mortal
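The two steps can be spelled out with intermediate type annotations; in this sketch (which reuses the earlier premises and a term-level dot), GHC confirms the type at each step.

```haskell
data Socrates = Socrates
data Man = Man
data Mortal = Mortal

p :: Man -> Mortal
p Man = Mortal

q :: Socrates -> Man
q Socrates = Man

dot :: (y -> z) -> (x -> y) -> x -> z
dot f g x = f (g x)

-- Step one: apply dot to p alone, fixing y and z.
step1 :: (x -> Man) -> x -> Mortal
step1 = dot p

-- Step two: apply the result to q, fixing x.
step2 :: Socrates -> Mortal
step2 = step1 q
```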
Overruled
We can devise many more reasonable logical rules.
If X is true, then X is true. Let’s call this id:
id :: x -> x
If X is true, then if Y is true, X is still true. Let’s call this const:
const :: x -> y -> x
If X implies that Y implies Z, then if X implies Y, then we must have X implies Z. Let’s call this ap:
ap :: (x -> y -> z) -> (x -> y) -> x -> z
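Each of these rules is witnessed by a term we can write down. A sketch, with fresh names chosen here to avoid clashing with the Prelude’s id and const (the definitions match the standard ones, and myAp matches Control.Monad’s ap specialized to functions):

```haskell
-- If X is true, then X is true.
myId :: x -> x
myId x = x

-- If X is true, then if Y is true, X is still true.
myConst :: x -> y -> x
myConst x _ = x

-- From X -> Y -> Z and X -> Y, conclude X -> Z.
myAp :: (x -> y -> z) -> (x -> y) -> x -> z
myAp f g x = f x (g x)
```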
It seems we can keep making up plausible rules as we go. In the extreme, we could conjure a rule out of thin air for every "therefore". This would make logic unconvincing.
The alternative is to commit to a small set of reasonable axioms that we take for granted, from which we deduce all other logical rules. How do we set this up?
Once again, our notation saves us. Taking x, y, z to be a, b -> a, a in the ap rule, we deduce:
ap const const :: a -> a
That is, the id rule is redundant because it’s implied by ap and const.
The dot rule is similarly redundant. After appropriate substitutions, we find:
ap (const ap) const :: (y -> z) -> (x -> y) -> x -> z
Generalizing this leads to a Hilbert system, which constructs any reasonable rule of logic from the ap and const axioms alone. (More accurately, these rules take care of the intuitionistic fragment of propositional logic.)
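GHC can check both redundancy claims for us. In this sketch the type annotations state exactly the derived rules, so the module compiling is the verification (ap comes from Control.Monad, specialized to the function monad):

```haskell
import Prelude hiding (id)
import Control.Monad (ap)

-- The id rule, derived from ap and const alone.
id :: a -> a
id = ap const const

-- The dot rule, derived likewise.
dot :: (y -> z) -> (x -> y) -> x -> z
dot = ap (const ap) const
```

For instance, dot (+1) (*2) 3 evaluates to 7, just as ordinary function composition would.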
QED
We began with a "why" question. Then, to determine whether an answer to our question is satisfactory, we developed notation, which is in fact pure unadulterated Haskell.
Therefore sincerely asking "Why learn Haskell?" leads to learning Haskell.
It so happens that computers can interpret Haskell too, so we can enlist their aid in checking the validity of logical arguments. This is invaluable in practice, because humans are bad at following finicky rules.
Epilogue
We may have oversimplified when translating "all men are mortal" and "Socrates is a man" to our notation. First-order logic is better suited for this example. But it’s fine to start small. If we can’t handle the simplified version, how can we handle the real deal?
We can type the above code in ghci:
> import Control.Monad
> dot = ap (const ap) const
> :t dot
dot :: (a -> b1) -> (b2 -> a) -> b2 -> b1
> data Socrates = Socrates
> data Man = Man
> data Mortal = Mortal
> p Man = Mortal
> q Socrates = Man
> :t dot p q
dot p q :: Socrates -> Mortal
The dot function should really be (.) but I wanted to avoid discussing Haskell’s infix operators. This is also why I chose ap instead of (<*>).
Instead of (.) id const ap, theoreticians say B I K S, and they call these combinators.
Given an expression like ap const const, how do we know if it makes sense? That is, how do we figure out substitutions like replacing y with b -> a? Hindley-Milner type inference determines if such a substitution is possible, and if so, returns the most general solution.
We’ve ignored inconvenient facts. Haskell is unsound because, for example, undefined is a proof of any proposition. But if you know this, then you already know way too much to bother reading this article!
We can easily augment our logic in Haskell. We get logical disjunction with Left Right either, logical conjunction with (,) fst snd, and logical negation with the Data.Void type. I skipped these because logical implication alone is enough to get to the moral of the story.
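For completeness, here is one way the other connectives might look. A sketch: the names Not, projLeft, fromEither, and contradiction are made up for illustration, while (,), fst, Either, either, Void, and absurd are the standard library pieces.

```haskell
import Data.Void (Void, absurd)

-- Conjunction: a pair proves both components; fst and snd are the projections.
projLeft :: (a, b) -> a
projLeft = fst

-- Disjunction: Left and Right introduce Either, and either does case analysis.
fromEither :: (a -> c) -> (b -> c) -> Either a b -> c
fromEither = either

-- Negation: "not A" is "A implies falsehood", with the empty type Void as falsehood.
type Not a = a -> Void

-- Ex falso: from A and not-A, anything follows.
contradiction :: a -> Not a -> c
contradiction x notX = absurd (notX x)
```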
By adding Peano’s axioms to logic, we can prove a lot of mathematical theorems. This is possible to do in Haskell with language extensions, but other languages such as Coq, Agda, and Idris are better at this.