This article is a guest post from Mikael Brevik, who is a speaker at JSConf Budapest on 14-15th May 2015.

Intro

Once upon a time in web development, we had perfect mental models through static HTML. We could predict the output without giving it much thought. If we were to change any of the content on the site, we did a full refresh, and we could still mentally visualise what the output would be. We communicated between elements on the website through a simple protocol of text and values, via attributes and children. But in time, as the web got more complex and we started to think of sites as applications, we needed to do partial updates without a full page refresh - to change some sub-part of the view without any server-side request. We started building up state in the DOM, and we broke the static mental model. This made our applications harder to reason about. Instead of just being able to look at the code and know what it was doing, we had to try really, really hard to imagine what the built-up state was at any given point.

Making web applications got harder as the systems got more and more complex, and a lot of this has to do with state. We should be able to reason about an application in a simpler way, and to build complex systems by combining small components which are more focused and don't require us to know what is happening in other parts of the system - as with HTML.

Functions and Purity

How can we go back to the days of static mental models and just being able to read the code from top to bottom? We still need to do dynamic updates of the view, as we want interactive, living pages that react to users, but we want to keep the mental model of refreshing the entire site. To achieve this we can take a functional approach and build an idempotent system - that is, a system which, given the same input, always produces the same output.

Let us introduce the concept of functions with referential transparency. These are functions whose invocations we can replace with their output values, and the system will still work as if the function had been invoked. A function that is referentially transparent is also pure - that is, a function with no side-effects. A pure and referentially transparent function is predictable in the sense that, given an input, it always returns the same output.

const timesTwo = (a) => a*2;

timesTwo(2) + timesTwo(2)
//=> 8

2 * timesTwo(2)
//=> 8

4 + 4
//=> 8

The function timesTwo, as seen above, is both pure and referentially transparent. We can easily switch out timesTwo(2) with the result 4 and our system would still work as before. There are no side-effects inside the function that alter the state of our application; its only effect is its output. We have the static mental model: we can read the contents from top to bottom, and based on the input we can predict the output.

Be wary, though. Sometimes you can have side-effects without knowing it. This often happens through mutation of passed-in objects. Not only can you have side-effects, you can create horizontally coupled functions which alter each other's behaviour in unexpected ways. Consider the following:

const obj = { foo: 'bar' };

const coupledOne = (input) =>
  console.log(input.foo = 'foo');

const coupledTwo = (input) =>
  // move to end of message queue, simulate async behaviour
  setTimeout(_ => console.log(input));

> coupledTwo(obj) // prints { foo: 'foo' } !!!!!
> coupledOne(obj) // prints 'foo'

Of course, the code sample above is contrived and very obvious, but something similar can happen more indirectly and is fairly common. You get passed a reference to an object and, without thinking about it, you mutate its contents. Other functions can be dependent on that object and get surprising behaviour. The solution is to avoid mutating the input: make a copy of the input, change the copy, and return the newly created copy (treating the data as immutable).
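As a small sketch of that copy-first approach (using `Object.assign`, which shallow-copies plain data objects):

```javascript
const obj = { foo: 'bar' };

// Copy first, then change the copy; the original input is left untouched.
const decoupled = (input) =>
  Object.assign({}, input, { foo: 'foo' });

const result = decoupled(obj);
console.log(result.foo); //=> 'foo'
console.log(obj.foo);    //=> 'bar' - no side-effect on the input
```

Since `decoupled` never mutates `obj`, other functions holding a reference to it can no longer be surprised.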

By having our functions referentially transparent, we get predictability. We can trust our functions: if one returns a result once, it returns the same output every time - given the same input.

const timesTwo = (a) => a*2;
> timesTwo(2) //=> 4
> timesTwo(2) //=> 4
> timesTwo(2) //=> 4
> timesTwo(2) //=> 4
> timesTwo(2) //=> 4

And by having our system predictable, it is also testable. There is no need to build up a big state which our system relies on; we can take one function, know the contract it expects (the input), and expect the same output. There is no need to test the inner workings of a function, just its output. Never test how it works, just that it works.

const timesTwo = (a) => a*2;
expect(timesTwo(1)).to.equal(2)
expect(timesTwo(2)).to.equal(4)
expect(timesTwo(3)).to.equal(6)
expect(timesTwo(-9999)).to.equal(-19998)

Composability and Higher Order Functions

But we don't get a large, usable system just by having some functions. Or do we? We can combine several smaller functions to build a complex, advanced system. If we think about it, a system is just handling data, transforming values and lists of values into different values and lists of values. And by having all functions transparent, we can use functions as higher order functions to compose them in different ways. Higher order functions are, as probably explained many times, just functions that can be passed as input to other functions or be returned from functions. In JavaScript we use higher order functions every day, maybe without thinking about them as higher order functions. A callback is one example of a higher order function.
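For instance, the familiar array methods take functions as input, and a function can just as easily return another function - a quick sketch of both directions:

```javascript
// filter is a higher order function: it takes a function as its input.
const isEven = (n) => n % 2 === 0;
const evens = [1, 2, 3, 4, 5, 6].filter(isEven);
console.log(evens); //=> [2, 4, 6]

// And a function can return a new function:
const greaterThan = (limit) => (n) => n > limit;
console.log([1, 5, 10].filter(greaterThan(4))); //=> [5, 10]
```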

We can use higher order functions to create new functions which are derived from one or more other functions. One easy example is a maybe function, which can decorate a function into being null safe. Below we see a naive implementation of the maybe decorator. We won't get into the full implementation here, but you can see an example in Reginald Braithwaite's fantastic book, JavaScript Allongé.

const maybe = function (fn) {
  return function (input) {
    if (!input) return;
    return fn.call(this, input);
  };
};

const impl1 = input => input.toLowerCase();
impl1(void 0) // would crash

const impl2 = maybe(input => input.toLowerCase());
impl2(void 0) // would **not** crash

Another use of higher order functions is to take two or more functions and combine them into one. This is where our pure functions really shine. We can implement a function, compose, which takes two functions and pipes the result of one function as input into the other: taking two different functions and creating a new, derived function as the combination of the two. Let's look at another naive implementation:

const compose = (fn1, fn2) =>
  input => fn1(fn2(input));

// Composing two functions
const prefix = (i) => 'Some Text: ' + i;
const shrink = (i) => i.toLowerCase();

const composed = compose(prefix, shrink);
composed('FOO') //=> 'Some Text: foo'
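Note that the argument order matters: compose applies the right-hand function first. A quick sketch using the same definitions:

```javascript
const compose = (fn1, fn2) =>
  input => fn1(fn2(input));

const prefix = (i) => 'Some Text: ' + i;
const shrink = (i) => i.toLowerCase();

// compose(fn1, fn2) runs fn2 first, then fn1.
const composedA = compose(prefix, shrink);
const composedB = compose(shrink, prefix);

console.log(composedA('FOO')); //=> 'Some Text: foo'
console.log(composedB('FOO')); //=> 'some text: foo'
```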

The last building block we will look at is partial application: the act of deriving a function by creating a new function with some pre-set inputs. Let's say we have a function taking two inputs, a and b, but we want a function that only takes one input, b, where the input a is set to a specific value.

const partial = (fn, a) =>
  (b) => fn(a, b);

const greet = (greeting, name) =>
  greeting + ', ' + name + '!';

const hello = partial(greet, 'Hello');

hello('Hank Pym') //=> 'Hello, Hank Pym!'

And we can of course compose all the different examples into one happy function.

const shrinkedHello = maybe(compose(
  partial(greet, 'Hello'),
  shrink));

shrinkedHello(void 0) // not crashing
shrinkedHello('HANK PYM') //=> 'Hello, hank pym!'

Now we have a basic understanding of how to combine small building blocks into functions that do more complex stuff. As each and every "primitive" function we have is pure and referentially transparent, our derived functions will be as well. This means our system will be idempotent. However, there is one thing we are missing: communication with the DOM.

The DOM is a Side-effect

We want our system to output something other than console logs. Our application should show pretty boxes with useful information in them. We're not able to do that without interacting with the DOM (or some other output end-point). Before we move on, there's one important thing to remember: the DOM is a huge side-effect and a massive bundle of state. Consider the following code, which is similar to the earlier example of functions tightly coupled through objects:

dom('#foo').innerHTML = 'bar'
const coupledOne = (input) =>
  input.innerText = 'foo';

const coupledTwo = (input) =>
  setTimeout(_ =>
    console.log(input.innerText));

coupledTwo(dom('#foo')) //=> 'foo' !!!!!
coupledOne(dom('#foo')) //=> 'foo'

We need to treat the DOM as the integration point it is. As with any other integration point, we want to handle it at the far edges of our data flow - using it just to represent the output of our system, not as our blob of state. Instead of letting our functions handle the interaction with the DOM, we do that somewhere else. Look at the following example/pseudo code:

const myComp = i => <h1>{i}</h1>;
const myCompTwo = i => <h2>{myComp(i)}</h2>;

const output = myComp('Hank Pym');
const newOutput = output + myComp('Ant-Man');


// Persist to the DOM somewhere
domUpdate(newOutput);

A Virtual DOM, like the one React has, is a way to abstract away the integration with the DOM. Moreover, it allows us to do dynamic page refreshes, semantically just like static HTML, but without the browser actually doing the refresh (and doing it performantly, by diff-ing between the changes and only actually interacting with the DOM when necessary).

const myComp = i => <h1>{i}</h1>;
const myCompTwo = i => <h2>{myComp(i)}</h2>;

const output = myComp('Hank Pym');

domUpdate(output);

const newOutput = output + myComp('Ant-Man');

// only update the second output
domUpdate(newOutput);

What we've seen in the last two examples aren't "normal" functions; they are view components: functions which return a view representation to be passed to a Virtual DOM.

Higher Order Components

Everything we've seen about functions is also true for components. We can build complex views by combining many small, less complex components. We also get the static mental model of pure and referentially transparent functions, but with views. We get the same reasoning as we had in the good old days with HTML, but instead of just communicating with simple strings and values, we can communicate with more complex objects and metadata. The communication can still work as with HTML, where information is passed from the top.

Referentially transparent components will give us predictable views, and this means testable views.

const myComp = component(input => <h1>{input}</h1>);

expect(renderToString(myComp('Hank Pym'))).to.equal('<h1>Hank Pym</h1>')
expect(renderToString(myComp('Sam Wilson'))).to.equal('<h1>Sam Wilson</h1>')

We can use combinators (functions which operate on higher order functions and combine behaviour) like map, which is a fairly common pattern in React. This works exactly as you'd expect: we can transform a list of data into a list of components representing that data.

const listItem = component(i => <li>{i}</li>);

const output = ['Wade', 'Hank', 'Cable'].map(listItem);
// output is now a list of list item components
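The same pattern can be illustrated with plain string-returning stand-ins (hypothetical substitutes for real view components, so the mapping is visible without a rendering library):

```javascript
// A plain function standing in for a list item view component.
const listItem = (i) => '<li>' + i + '</li>';

// map transforms a list of data into a list of view representations.
const output = ['Wade', 'Hank', 'Cable'].map(listItem);
console.log(output.join(''));
//=> '<li>Wade</li><li>Hank</li><li>Cable</li>'
```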

The components created in this example are made using a library called Omniscient.js, which adds syntactic sugar on top of React components to encourage referentially transparent components. Documentation of the library can be seen on the homepage http://omniscientjs.github.io/.

These kinds of components can also be composed in different ways. For instance, we can communicate in a nested structure, where components are passed as children.

const myComp = component(input => <h1>{input}</h1>);
const myCompTwo = component(input => <div>{myComp(input)}</div>);

const output = myCompTwo('Hank Pym');

Here we define myComp as an explicit child of myCompTwo. But this hard-binds myCompTwo to myComp, and you wouldn't be able to use myCompTwo without the other. We can borrow concepts from our previously defined combinators (i.e. compose) to derive a component which leaves both myComp and myCompTwo usable without each other.

const h1 = component(i => <h1>{i}</h1>);
const em = component(i => <em>{i}</em>);

const italicH1 = compose(h1, em);
const output = italicH1('Wade Wilson');

In the example above, we create the derived component italicH1, which has the composed behaviour of both h1 and em, but we can still use both h1 and em independently. This is just like we saw previously with pure functions. We can't use the exact same implementation of compose as before, but we can take a similar approach. A straightforward implementation could be something like the following:

function compose (...fns) {
  return (...args) =>
    fns.reduceRight((child, fn) =>
      fn.apply(this,
        child ? args.concat(child) : args),
      null);
}

This function takes all passed components and, reducing from the right, passes the accumulated child into each component until there are no more components to accumulate.
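To see the accumulation in action, here is the same compose exercised with plain string-returning stand-ins (hypothetical, not actual Omniscient components):

```javascript
// The same children-passing compose as above.
function compose (...fns) {
  return (...args) =>
    fns.reduceRight((child, fn) =>
      fn.apply(this, child ? args.concat(child) : args),
      null);
}

// Plain string-returning stand-ins: if a child is passed, render it instead
// of the raw input.
const em = (text) => '<em>' + text + '</em>';
const h1 = (text, child) => '<h1>' + (child || text) + '</h1>';

const italicH1 = compose(h1, em);
console.log(italicH1('Wade Wilson'));
// First em('Wade Wilson') produces '<em>Wade Wilson</em>',
// then h1('Wade Wilson', '<em>Wade Wilson</em>')
//=> '<h1><em>Wade Wilson</em></h1>'
```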

We can also borrow the concept of partial application to derive new components. As an example, imagine we have a header element which takes options to define a class name, with title text passed as a child. If we want to use that component several times throughout our system, we wouldn't want to pass in the class name as a string everywhere, but rather create a component that has that class name pre-set. So we could create a header one element called underlinedH1.

const comp = component(({children, className}) =>
  <h1 className={className}>{children}</h1>
);

const underlinedH1 = partial(comp, {
  className: 'underline-title'
});
const output = underlinedH1('Hank');

We derive a component which always returns an underlined header. The code for implementing partial application for components is a bit more complicated and can be seen as a gist. Following the functional pattern further, we can do something like the maybe decorator with components as well:

const maybe = function (fn) {
  return (input) => {
    if (!input) return <span />;
    return fn(input);
  };
};

const comp = maybe(component(({children}) => <h1>{children}</h1>));

We can combine the different transformation functions, partial applications and components as we did with functions.

const greet = component(({greeting, children}) =>
  <h1>{greeting}, {children}!</h1>
);

const shrinkedHello = maybe(compose(
  partial(greet, 'Hello'),
  shrink));

Summary

In this post we've seen how we can use functional programming to make systems that are much easier to reason about, and how to get systems with a static mental model, much like we had with the good old HTML. Instead of just communicating with attributes and values, we can have a protocol with more complex objects, where we can even pass down functions or something like event emitters.

We've also seen how we can use the same principles and building blocks to make predictable and testable views, where we always get the same output given the same input. This makes our application more robust, and we get a clear separation of concerns. This is a product of having multiple smaller components which we can re-use in different settings, both directly and in derived forms.

Although the examples shown in this blog post use a Virtual DOM and React, the concepts are sound even without that implementation, and are something you could think about when building your views.

Disclaimer: This is an ongoing experiment, and some of the concepts of combinators on higher order components aren't too well tested; they are more conceptual thoughts than perfect implementations. The code works conceptually and with basic implementations, but hasn't been used extensively.

See more on Omniscient.js and referentially transparent components on the project homepage http://omniscientjs.github.io/, or feel free to ask questions using issues.
