Handling React Forms with Mobx Observables

When you’re building a web application, you have to handle forms to get input from your users.

Unfortunately, forms in React are not that straightforward at the beginning, especially if you are used to a full-featured framework like Angular.js - and I've seen people asking about handling React forms with Mobx multiple times.

In this post, I am going to explain a simple approach to handle React forms, without using an already existing form library. At the same time, I'll describe a few techniques and patterns that you can use in your applications.

This is the first part of the tutorial series about handling React forms using Mobx Observables.

  • First part: Handling the form data (you are reading it now)
  • Second part: Validating the form (coming soon)

Core ideas:

  • Handling inputs onChange events
  • Creating reusable components
  • How to use Higher-Order Components (HOC)
  • Simple data management and rendering with observables

I will start from the basic principle that lets us modify the form data and iterate over the idea until we reach a simple solution. Note that while I am going to use observables and Mobx, most of the code and ideas here can be applied in general.

Github repo available

There is a Github repo available with the full code created for this article.

I'll indicate when to check it out (tags) in every section. It is highly recommended that you do so while reading this article because only a subset of the code will be displayed on this page.

If you’re already familiar with Mobx, I recommend jumping directly to the React forms section of the article.

What is Mobx and why use it?

Mobx is a library that allows you to create observable data. It has bindings for React, which means that it allows React components to update automatically when the data they depend on changes.

It allowed me to greatly simplify my applications compared to the usually recommended flux architecture with libraries like Redux.

(Image: Mobx data flow with React forms)

Working with Mobx is simple because you can work with objects the way you always have in JavaScript (simply changing object property values), and you can also achieve great rendering performance with no effort at all.

(Image: immutables vs. observables, Mobx vs. Redux)

So, if you don't know Mobx yet, I encourage you to check their site and the presentations they have.

React forms

Let's start with the handling of form inputs.

In React, there is a concept called "controlled input." This means the following:

  • The input value is set with the props provided through React.
  • The form data and the input value are updated through an onChange handler.
// example inside a component
render () {  
  return <input type="text"
                value={this.props.person.fullName}
                onChange={this.onChange}/>
}

For further info, check out the React controlled forms documentation.

The onChange "trick"

Let's start with a secret: the onChange handler.

It is about providing not only the new value but also "what" should be updated.

Given a certain form input, I will use the name attribute to tell what property needs to be updated along with its new value.

onChange (event) {  
  this.updateProperty(event.target.name, event.target.value)
}

It is inspired by PHP, where it is possible to handle arrays in HTML forms like this:

<form action="person.php" method="POST">  
  <input name="person[email]" />
  <input name="person[phone]" />
</form>

The form values would be parsed as you can imagine in PHP.

Result of $_POST:

    'person' => array(
        'email' => '',
        'phone' => ''
    )

Back to JavaScript: imagine a person's data (name, address, job, ...):
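
As a sketch, such a person object might look like this (the field names are illustrative, not taken from the repo):

```javascript
// Hypothetical person data; the field names are only for illustration
const person = {
  fullName: 'Jon',
  email: 'jon@example.com',
  job: 'developer'
}
```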

To update the name in JavaScript, the only thing you would need to do is:

person.fullName = 'Jack'  

Let's imagine a generic updateProperty function that should update any property of the person:

function updateProperty (key, value) {  
  person[key] = value
}

Simple. Now let's put the things together.

Creating the React form components

Article repo: git checkout step-1-basics

Let's glue the pieces together with React and Mobx to handle a form for this person:

First, let's create an observable person with mobx.
This is done by passing your object to mobx.observable.

Then let's create PersonForm.js: the React form component for a person, starting with the person's name. It will receive the observable person data as a prop.

How does this work?

  1. When the user types in the field, the onChange handler gets the corresponding person property to update: "fullName".

  2. It updates the person data by calling the updateProperty method with the new value.

  3. The field will be re-rendered by React with the updated value thanks to the component being a Mobx observer that is reacting to changes in the "observable person".

Note: if you look at the repo code, I am creating the observable person data in the app constructor and passing it to the form.

It is up to you to decide how you provide the data to your form component and how you will submit it (fetch API, store, actions), but I'll come back to it later. (App component gist)

First refactor: InputField component

Article repo: git checkout step-2-inputfield

So far, we have updated one property and, while we could simply do some copy-pasting to update the email and the job, we will do something better.

Let's create an input component that will "emit" what we need by default, plus some extras.

  • My input is an observer.
  • By default, it will call the onChange handler with the field name and the new value.
  • Let's add some extra markup: a label, to show the benefits of reusing components.

And that's how I can use it in my person form:

  • I don't need an onChange handler in my form component anymore.
  • I can pass the updateProperty handler directly to my inputs.

Important benefit of this approach

By default, React updates the whole component subtree and, as you might know, you can define the shouldComponentUpdate method to spare unnecessary updates. As a developer, you then either have to deal with immutables or do some tedious manual checks.

But, by using mobx-react observers, the shouldComponentUpdate method will be implemented for you. This means that updating one field will trigger the re-rendering of this field only. You get the best performance without any effort. React docs: shouldComponentUpdate

What about complex forms?

Actually, at this point, you already know how to deal with them. That's the beauty of this simple approach.

Article repo: git checkout step-3-nestedform

Deep objects

My person had an address.

To update the address, consider it a nested form and apply the same principle.

Create a PersonAddress form component that is just the same as the "base" Person form component, and reuse the InputField component:

And use it in the Person form:
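
Stripped of React, the mechanics of the nested form can be sketched like this (the updater signature is simplified for illustration):

```javascript
// The address sub-form receives person.address as its form data, so the
// same generic updater writes into the nested object
function updateProperty (formData, key, value) {
  formData[key] = value
}

const person = {
  fullName: 'Jon',
  address: { street: '', city: '' }
}

updateProperty(person.address, 'city', 'Paris')
// person.address.city is now 'Paris'
```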

Arrays of objects

Article repo: git checkout step-4-form-array

Consider them arrays of forms.

For example, our person now has some tasks:

Create a PersonTask form component, using the same concept as the address component.

Then, just "map":

Second refactor: form capabilities as higher order component

Article repo: git checkout step-5-hoc

As you might have noticed, we are still repeating some code in every form / subform.

The form data update mechanism:

  constructor (props) {
    super(props)
    this.updateProperty = this.updateProperty.bind(this)
  }

  updateProperty (key, value) {
    this.props.address[key] = value
  }

Instead of this, let’s extract this logic to a higher order component.

What is a Higher Order Component (HOC)?

A higher order component is a function.

It takes a component as an argument and returns another component that wraps it, adding any kind of behavior you want it to have.
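
The idea can be sketched with plain functions, no React required; here a "component" is just a render function, and withShouting is a hypothetical HOC:

```javascript
// An HOC is just a function of a component: it returns a wrapped version
function withShouting (Component) {
  return function Wrapped (props) {
    return Component(props).toUpperCase()
  }
}

const Hello = (props) => 'hello ' + props.name
const LoudHello = withShouting(Hello)

LoudHello({ name: 'jack' }) // → 'HELLO JACK'
```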

In the case of our forms, we will create "asForm", an HOC that provides the form data update mechanism to any component.

What you can see in the code:

  • asForm is a function.
  • Its first argument, MyComponent should be a component.
  • It returns a new component called Form that wraps MyComponent.
  • It adds the form update mechanism as a prop to MyComponent: updateProperty.

  • About the second argument, formDataProp: it should be the name (string) of the prop that points to the form data. You might be passing more props to your form, like UI-related stuff for example. It's a simple way to indicate what should be updated.

Using the asForm higher order component

Let's take the address component and refactor it.

As you can see:

The PersonAddress component is now very simple; we have extracted all the logic related to the address updates.

  • We imported the asForm HOC and wrapped the address component, indicating which prop has the form data (last line).
  • We simply used the onChange handler provided by the asForm HOC for the inputs.

And that's it. We can repeat the refactor process for the tasks forms (or any other). From now on, the developer only needs to care about the form presentation by providing the relevant inputs.

What about other types of input?

Article repo: git checkout step-6-radio-check

Choosing input types is about what you want from your user: you may want to force your users to choose only one option from many (radio buttons), or let them pick as many options as they want (checkboxes).

You can apply to radio and checkboxes the same principle that was used for input [text|email|number]: emit name and value from the onChange.

While radio buttons and checkboxes are "native" browser controls, you can create your own input components / UX to achieve this. You can check the repo to see how radio and checkbox inputs can be handled (step-6-radio-check).
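
To make "emit name and value" concrete, here is a plain-JS sketch of such a handler; the input objects below just mimic DOM event targets:

```javascript
// One handler shape for text, radio and checkbox inputs; checkboxes emit
// their checked state instead of a value
function extractNameAndValue (target) {
  const value = target.type === 'checkbox' ? target.checked : target.value
  return { name: target.name, value: value }
}

extractNameAndValue({ type: 'radio', name: 'color', value: 'blue' })
// → { name: 'color', value: 'blue' }
extractNameAndValue({ type: 'checkbox', name: 'subscribed', checked: true })
// → { name: 'subscribed', value: true }
```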

Last example: a list of checkboxes

Article repo: git checkout step-7-checklist

It was simple until now, but we don't always have a simple value to update. What about arrays?

Let's say that we want to ask a person what mascots she has. For this, your model is an array of simple values, like
mascots: ['dog', 'cat'], and the list itself will present more animals.

We will follow the same principles like before:

  • First, let’s add a new handler to the asForm HOC. This handler will simply remove or add an element of an array. Let's call it updateArray.
  • Create a component "InputCheckboxes" that takes a list of items and the list of currently selected items. It will render them as a list of checkboxes.

You can check the repo or this InputCheckboxes gist for implementation details.
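
A minimal sketch of such an updateArray handler (the exact signature in the repo may differ):

```javascript
// Adds the value when the box is checked, removes it when unchecked;
// mobx observable arrays support these plain array methods too
function updateArray (arr, value, checked) {
  if (checked) {
    arr.push(value)
  } else {
    arr.splice(arr.indexOf(value), 1)
  }
}

const mascots = ['dog', 'cat']
updateArray(mascots, 'bird', true)  // mascots → ['dog', 'cat', 'bird']
updateArray(mascots, 'dog', false)  // mascots → ['cat', 'bird']
```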

It would be used in our PersonForm component as below.

const mascots = ['bird', 'cat', 'dog', 'iguana', 'pig', 'other']  
<InputCheckboxes items={mascots} name="mascots" checkedItems={person.mascots} onChange={updateArray}/>  

As you can see, compared to previous examples, we are passing updateArray instead of updateProperty for the onChange handler.

Submitting the form

Article repo: git checkout step-8-submit

I have created a last step where you can check how to submit the form.

We simply have to pass a submit handler to the form component. This is where you might trigger an "action" and call your service APIs.


We have seen how easy it is to create reusable form helpers with a Higher-Order Component. You can extend your form HOC's update handlers to fit any of your data structures, combined with any UX you wish, using React components.

React views update automatically and Mobx optimizes the rendering.

Next up

In the second part of the article (coming soon), I will show you how you can validate the form itself. In the meantime, share your thoughts in the comments section.

React.js Best Practices for 2016

2015 was the year of React with tons of new releases and developer conferences dedicated to the topic all over the world. For a detailed list of the most important milestones of last year, check out our React in 2015 wrap up.

The most interesting question for 2016: How should we write an application and what are the recommended libraries?

As a developer who has been working with React.js for a long time, I have my own answers and best practices, but it's possible that you won't agree with me on everything. I'm interested in your ideas and opinions: please leave a comment so we can discuss them.

If you are just getting started with React.js, check out our React.js tutorial, or the React howto by Pete Hunt.

Dealing with data

Handling data in a React.js application is super easy, but challenging at the same time.
It happens because you can pass properties to a React component in a lot of ways to build a rendering tree from it; however it's not always obvious how you should update your view.

2015 started with the releases of different Flux libraries and continued with more functional and reactive solutions.

Let's see where we are now:


Flux

According to our experience, Flux is often overused (meaning that people use it even when they don't need it).

Flux provides a clean way to store and update your application's state and trigger rendering when it's needed.

Flux can be useful for the app's global states, like managing the logged in user, the state of a router or an active account, but it can quickly turn into a pain if you start to manage your temporary or local data with it.

We don’t recommend using Flux for managing route-related data like /items/:itemId. Instead, just fetch it and store it in your component's state. In this case, it will be destroyed when your component goes away.

If you need more info about Flux, The Evolution of Flux Frameworks is a great read.

Use redux

Redux is a predictable state container for JavaScript apps.

If you think you need Flux or a similar solution you should check out redux and Dan Abramov's Getting started with redux course to quickly boost your development skills.

Redux evolves the ideas of Flux but avoids its complexity by taking cues from Elm.

Keep your state flat

APIs often return nested resources. These can be hard to deal with in a Flux or Redux-based architecture. We recommend flattening them with a library like normalizr and keeping your state as flat as possible.

Hint for pros:

const data = normalize(response, arrayOf(schema.user))

state = _.merge(state, data.entities)  

(we use isomorphic-fetch to communicate with our APIs)
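
To see what "flat" means, here is a hand-rolled sketch of the transformation normalizr performs; the response shape and entity names are assumed:

```javascript
// Flatten a nested response into entity tables keyed by id; nested
// objects are replaced by id references
const response = [
  { id: 1, name: 'Ann', pet: { id: 7, name: 'Rex' } },
  { id: 2, name: 'Bob', pet: { id: 7, name: 'Rex' } }
]

const entities = { users: {}, pets: {} }
response.forEach((user) => {
  entities.pets[user.pet.id] = user.pet
  entities.users[user.id] = Object.assign({}, user, { pet: user.pet.id })
})

// entities.users[2].pet is now 7 (a reference, not a nested copy)
```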

Use immutable states

Shared mutable state is the root of all evil - Pete Hunt, React.js Conf 2015

An immutable object is an object whose state cannot be modified after it is created.

Immutable objects can save us all a headache and improve the rendering performance with their reference-level equality checks. Like in the shouldComponentUpdate:

shouldComponentUpdate(nextProps) {  
 // instead of object deep comparison
 return this.props.immutableFoo !== nextProps.immutableFoo
}

How to achieve immutability in JavaScript?
The hard way is to be careful and write code like the examples below, which you should always check in your unit tests with deep-freeze-node (freeze before the mutation and verify the result after it).

return {  
  ...state,
  foo
}

return arr1.concat(arr2)  

Believe me, these were the pretty obvious examples.
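
The freeze-based check can be illustrated with the built-in Object.freeze, which deep-freeze-node applies recursively:

```javascript
// A frozen object rejects mutations, so an accidental state mutation
// surfaces in your unit tests instead of slipping through
const state = Object.freeze({ foo: 'bar' })

let mutated = true
try {
  state.foo = 'baz'              // ignored, or a TypeError in strict mode
  mutated = state.foo === 'baz'
} catch (e) {
  mutated = false
}
// mutated is false, state.foo is still 'bar'
```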

The less complicated, but also less natural, way is to use Immutable.js.

import { fromJS } from 'immutable'

const state = fromJS({ bar: 'biz' })  
const newState = state.set('bar', 'baz')  

Immutable.js is fast, and the idea behind it is beautiful. I recommend watching the Immutable Data and React video by Lee Byron even if you don't want to use it. It will give you deep insight into how it works.

Observables and reactive solutions

If you don't like Flux/Redux or just want to be more reactive, don't be disappointed! There are other solutions to deal with your data. Here is a short list of the libraries you are probably looking for:

  • cycle.js ("A functional and reactive JavaScript framework for cleaner code")
  • rx-flux ("The Flux architecture with RxJS")
  • redux-rx ("RxJS utilities for Redux.")
  • mobservable ("Observable data. Reactive functions. Simple code.")


Routing

Almost every client side application has some routing. If you are using React.js in a browser, you will reach the point when you should pick a library.

Our chosen one is the react-router by the excellent rackt community. Rackt always ships quality resources for React.js lovers.

To integrate react-router, check out their documentation, but what's more important here: if you use Flux/Redux, we recommend keeping your router's state in sync with your store/global state.

Synchronized router states will help you to control router behaviors by Flux/Redux actions and read router states and parameters in your components.

Redux users can simply do it with the redux-simple-router library.

Code splitting, lazy loading

Only a few webpack users know that it's possible to split your application’s code to separate the bundler's output into multiple JavaScript chunks:

require.ensure([], () => {  
  const Profile = require('./Profile.js')
  this.setState({
    currentComponent: Profile
  })
})

It can be extremely useful in large applications because the user's browser doesn't have to download rarely used code, like the profile page, after every deploy.

Having more chunks will cause more HTTP requests, but that’s not a problem with HTTP/2 multiplexing.

Combining this with chunk hashing, you can also optimize your cache hit ratio after code changes.

The next version of react-router will help a lot in code splitting.

For the future of react-router check out this blog post by Ryan Florence: Welcome to Future of Web Application Delivery.


JSX

A lot of people are complaining about JSX. First of all, you should know that it’s optional in React.

At the end of the day, it will be compiled to JavaScript with Babel. You can write JavaScript instead of JSX, but it feels more natural to use JSX while you are working with HTML.
Especially because even less technical people could still understand and modify the required parts.

JSX is a JavaScript syntax extension that looks similar to XML. You can use a simple JSX syntactic transform with React. - JSX in depth

If you want to read more about JSX check out the JSX Looks Like An Abomination - But it’s Good for You article.

Use Classes

React works well with ES2015 classes.

class HelloMessage extends React.Component {  
  render() {
    return <div>Hello {this.props.name}</div>
  }
}

We prefer higher order components over mixins, so for us leaving createClass behind was more of a syntactical question than a technical one. We believe there is nothing wrong with using createClass over React.Component and vice versa.


PropTypes

If you still don't check your properties, you should start 2016 by fixing this. It can save you hours, believe me.

MyComponent.propTypes = {  
  isLoading: PropTypes.bool.isRequired,
  items: ImmutablePropTypes.listOf(
    ImmutablePropTypes.contains({
      name: PropTypes.string.isRequired,
    })
  ).isRequired
}

Yes, it's possible to validate Immutable.js properties as well with react-immutable-proptypes.

Higher order components

Now that mixins are dead and not supported in ES6 class components, we should look for a different approach.

What is a higher order component?

PassData({ foo: 'bar' })(MyComponent)  

Basically, you compose a new component from your original one and extend its behaviour. You can use it in various situations like authentication: requireAuth({ role: 'admin' })(MyComponent) (check for a user in higher component and redirect if the user is not logged in) or connecting your component with Flux/Redux store.

At RisingStack, we also like to separate data fetching and controller-like logic to higher order components and keep our views as simple as possible.


Testing

Testing with good test coverage must be an important part of your development cycle. Luckily, the React.js community came up with excellent libraries to help us achieve this.

Component testing

One of our favorite libraries for component testing is enzyme by AirBnb. With its shallow rendering feature, you can test the logic and rendering output of your components, which is pretty amazing. It still cannot replace your selenium tests, but you can step up to a new level of frontend testing with it.

it('simulates click events', () => {  
  const onButtonClick = sinon.spy()
  const wrapper = shallow(
    <Foo onButtonClick={onButtonClick} />
  )
  wrapper.find('button').simulate('click')
  expect(onButtonClick.calledOnce).to.equal(true)
})

Looks neat, doesn't it?

Do you use chai as an assertion library? You will like chai-enzyme!

Redux testing

Testing a reducer should be easy: it responds to the incoming actions and turns the previous state into a new one:

it('should set token', () => {  
  const nextState = reducer(undefined, {
    type: USER_SET_TOKEN,
    token: 'my-token'
  })

  // immutable.js state output
  expect(nextState.toJS()).to.eql({
    token: 'my-token'
  })
})

Testing actions is simple until you start to use async ones. For testing async redux actions, we recommend checking out redux-mock-store; it can help a lot.

it('should dispatch action', (done) => {  
  const getState = {}
  const action = { type: 'ADD_TODO' }
  const expectedActions = [action]

  const store = mockStore(getState, expectedActions, done)
  store.dispatch(action)
})

For deeper redux testing visit the official documentation.

Use npm

Although React.js works well without code bundling, we recommend using Webpack or Browserify to have the power of npm. npm is full of quality React.js packages, and it can help to manage your dependencies in a nice way.

(Please don’t forget to reuse your own components, it’s an excellent way to optimize your code.)

Bundle size

This question is not React-related but because most people bundle their React application I think it’s important to mention it here.

While you are bundling your source code, always be aware of your bundle’s file size. To keep it at the minimum you should consider how you require/import your dependencies.

Check the following code snippet; the two different ways can make a huge difference in the output:

import { concat, sortBy, map, sample } from 'lodash'

// vs.
import concat from 'lodash/concat';  
import sortBy from 'lodash/sortBy';  
import map from 'lodash/map';  
import sample from 'lodash/sample';  

Check out the Reduce Your bundle.js File Size By Doing This One Thing for more details.

We also like to split our code into at least vendors.js and app.js, because vendors update less frequently than our code base.
By hashing the output file names (chunk hash in Webpack) and caching them for the long term, we can dramatically reduce the size of the code that needs to be downloaded by returning visitors to the site. Combining it with lazy loading, you can imagine how optimal it can be.
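
A hypothetical webpack 1 config fragment for such a split; the entry names and hash pattern are illustrative:

```javascript
// Separate, long-term-cacheable vendors bundle with hashed filenames
var webpack = require('webpack');

module.exports = {
  entry: {
    app: './app/main.js',
    vendors: ['react', 'react-dom']
  },
  output: {
    path: __dirname + '/dist',
    filename: '[name].[chunkhash].js'
  },
  plugins: [
    // Moves modules shared with the vendors entry into their own chunk
    new webpack.optimize.CommonsChunkPlugin('vendors', 'vendors.[chunkhash].js')
  ]
};
```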

If you are new to Webpack, check out this excellent React webpack cookbook.

Component-level hot reload

If you have ever written a single page application with livereload, you probably know how annoying it is when you are working on something stateful and the whole page reloads when you hit save in your editor. You have to click through the application again, and you will go crazy repeating this a lot.

With React, it's possible to reload a component while keeping its state - boom, no more pain!

To set up hot reloading, check out the react-transform-boilerplate.

Use ES2015

I mentioned that we use JSX in our React.js components, which we transpile with Babel.js.

Babel can do much more and also makes it possible to write ES6/ES2015 code for browsers today. At RisingStack, we use ES2015 features on both the server and client side, which are available in the latest LTS Node.js version.


Linters

Maybe you already use a style guide for your JavaScript code, but did you know that there are style guides for React as well? We highly recommend picking one and starting to follow it.

At RisingStack, we also run our linters on the CI system and on git push as well. Check out pre-push or pre-commit.

We use JavaScript Standard Style for JavaScript with eslint-plugin-react to lint our React.js code.

(That's right, we do not use semicolons anymore.)

GraphQL and Relay

GraphQL and Relay are relatively new technologies. At RisingStack, we don’t use them in production for now; we're just keeping our eyes open.

We wrote a library called graffiti, which is a MongoDB ORM for Relay that makes it possible to create a GraphQL server from your existing mongoose models.
If you would like to learn these new technologies, we recommend checking it out and playing with it.

Takeaway from these React.js Best Practices

Some of the highlighted techniques and libraries are not React.js-related at all - always keep your eyes open and check what others in the community are doing. The React community was inspired a lot by the Elm architecture in 2015.

If you know about other essential React.js tools that people should use in 2016, let us know in the comments!

React in 2015 - Retrospection

In details

2015 was the year of React

React had an amazing year with tons of new releases and developer conferences, thanks to the contribution of the open-source community and enterprise adopters. As a result, React is used by companies like Facebook, Yahoo, Imgur, Mozilla, Airbnb, Netflix, Sberbank and many more. For a more detailed list, check out this collection: Sites Using React

If you are not familiar with React, check out our in-depth tutorial series: The React.js Way

2015 January

2015 February

2015 March

2015 May

2015 June

2015 July

2015 August

2015 October

2015 November

2015 December

Read more about React

If you can't get enough of React head over to our related articles page.

Do you miss anything from the timeline? Let us know in the comments.

Using React with Webpack Tutorial

This article is a guest post from Christian Alfoni, who is a speaker among other world-class React hackers at Reactive2015 in Bratislava, November 2-4 2015.

It has been a year since I first got into React and Webpack. I have many times expressed that Webpack is amazing, but hard to configure. That being truthy, I think there is a different reason why developers do not adopt it. So I want to go head first and say: "Webpack is amazing, but it is hard to understand why." In this article, I will try to convey the core of what makes Webpack great. Then we are going to look at the very latest contributions to the Webpack/React ecosystem.

The core idea of Webpack

To understand Webpack, it can often be a good idea to talk about Grunt and Gulp first. The input to a Grunt task or a Gulp pipeline is file paths (globs). The matching files can be run through different processes: typically transpile, concat, minify, etc. This is a really great concept, but neither Grunt nor Gulp understands the structure of your project. If we compare this to Webpack, you could say that Gulp and Grunt handle files, while Webpack handles projects.

With Webpack, you give it a single path: the path to your entry point, typically index.js or main.js. Webpack will now investigate your application. It will figure out how everything is connected through require, import, etc. statements, url values in your CSS, href values in image tags, and so on. It creates a complete dependency graph of all the assets your application needs to run, all from pointing to one single file.

An asset is a file, be it an image, CSS, LESS, JSON, JS, JSX, etc. And this file is a node in the dependency graph created by Webpack.

|---------|         |------------|       |--------|
| main.js | ------- | styles.css | ----- | bg.png |
|---------|    |    |------------|       |--------|
               |    |--------|       |-------------|
               |--- | app.js | ----- | config.json |
                    |--------|       |-------------|

When Webpack investigates your app, it will hook new nodes onto the dependency graph. When a new node is found, it will check the file extension. If the extension matches your configuration, it will run a process on it. This process is called a loader. An example of this would be transforming the content of a .js file from ES6 to ES5. Babel is a project that does this, and it has a Webpack loader. Install it with npm install babel-loader.

import path from 'path';

const config = {

  // Gives you sourcemaps without slowing down rebundling
  devtool: 'eval-source-map',
  entry: path.join(__dirname, 'app/main.js'),
  output: {
    path: path.join(__dirname, '/dist/'),
    filename: '[name].js',
    publicPath: '/'
  },
  module: {
    loaders: [{
      test: /\.js?$/,
      exclude: /node_modules/,
      loader: 'babel'
    }]
  }
};

export default config;
We basically tell Webpack that whenever it finds a .js file it should be passed to the Babel loader.

This is really great, but it is just the beginning. With Webpack, a loader is not just an input/output transform. You can do some pretty amazing stuff that we are going to look at now. The funny thing about Webpack is that it has been out for quite some time, and so have the additions I am going to talk about here. For some reason, the word just does not get out... anyways, hopefully it will at least reach you now :-)

Express middleware

Using Node as a development server is really great. Maybe you run Node in production, but even if you do not, you should have a Node development server. Why, you ask? Well, what web application does not talk to a server? Instead of faking requests and responses in your client application, why not do that with a Node development server? Now you can implement your application as if you had a fully working backend. This makes the transition to production easier.

To make Webpack work with a Node backend you just have to npm install webpack-dev-middleware and bippeti-bappeti....

import path from 'path';  
import express from 'express';  
import webpack from 'webpack';  
import webpackMiddleware from 'webpack-dev-middleware';  
import config from './webpack.config.js';

const app = express();  
const compiler = webpack(config);

app.use(express.static(__dirname + '/dist'));  
app.use(webpackMiddleware(compiler));  
app.get('*', function response(req, res) {  
  res.sendFile(path.join(__dirname, 'dist/index.html'));
});

app.listen(3000);

And that's it! A Node development server with Webpack bundling capabilities.

ES6 on Node

As you can see, I am using ES6 code on Node. There is really no reason why the JavaScript on the client should look different from the JavaScript on the server. Since you have already installed babel-loader, which includes babel-core, you have what you need. In your package.json, change the following:

  "scripts": {
    "start": "node server.js"
  }

to:

  "scripts": {
    "start": "babel-node server.js"
  }

Easy peasy. You can now even use JSX on the server. Note that babel-node is not recommended for production. You have to pre-transpile the server code and you can use Webpack for that.

Hot loading code

Hot loading code is a great concept. It makes your workflow a lot smoother. Normally you have to refresh the application and sometimes click your way back to the same state. We spend a lot of time on this, and we should not do that. As I mentioned, Webpack can do some pretty amazing things with its loaders. Hot loading styles is the first we will look at, but before that we have to make our Webpack workflow allow hot loading:

npm install webpack-hot-middleware

import path from 'path';  
import express from 'express';  
import webpack from 'webpack';  
import webpackMiddleware from 'webpack-dev-middleware';  
import webpackHotMiddleware from 'webpack-hot-middleware'; // This line  
import config from './webpack.config.js';

const app = express();  
const compiler = webpack(config);

app.use(express.static(__dirname + '/dist'));  
app.use(webpackMiddleware(compiler));  
app.use(webpackHotMiddleware(compiler)); // And this line  
app.get('*', function response(req, res) {  
  res.sendFile(path.join(__dirname, 'dist/index.html'));
});

app.listen(3000);

Hot loading styles

First we add a new loader to our project. This makes Webpack understand what CSS is; specifically, it will understand what a url means. It will treat it like any other require, import, etc. statement. But we do not just want to understand CSS, we also want to add it to our page. With npm install css-loader style-loader we can add behavior to our CSS loading.

import path from 'path';
import webpack from 'webpack';

const config = {

  devtool: 'eval-source-map',

  // We add an entry to connect to the hot loading middleware from
  // the page
  entry: [
    'webpack-hot-middleware/client',
    path.join(__dirname, 'app/main.js')
  ],
  output: {
    path: path.join(__dirname, '/dist/'),
    filename: '[name].js',
    publicPath: '/'
  },

  // This plugin activates hot loading
  plugins: [
    new webpack.HotModuleReplacementPlugin()
  ],
  module: {
    loaders: [{
      test: /\.js?$/,
      exclude: /node_modules/,
      loader: 'babel'
    }, {
      test: /\.css?$/,
      loader: 'style!css' // These are the loaders
    }]
  }
};

export default config;

In our config we tell Webpack to first run the css-loader and then the style-loader; the loader string reads from right to left. The css-loader makes any urls inside the CSS part of our dependency graph, and the style-loader puts a style tag with the CSS into our HTML.

So now you see that we do not only process files with Webpack, we can create side effects like creating style tags. With the HOT middleware, we can even run these side effects as we change the code of the app. That means every time you change some CSS Webpack will just update the existing style tag on the page, without a refresh.

Hot loading components

I got a developer crush on Dan Abramov after he released react-hot-loader, now called react-transform. Hot loading CSS is pretty neat, but you can do the same with React components. The react-transform project is not a Webpack loader, which react-hot-loader actually was; react-transform is a Babel transform. To configure it, you first npm install react-transform and then add a file called .babelrc to your project:

  "stage": 2,
  "env": {
    "development": {
      "plugins": ["react-transform"],
      "extra": {
        "react-transform": {
          "transforms": [{
            "transform": "react-transform-hmr",
            "imports": ["react"],
            "locals": ["module"]

I have not asked Dan why he decided to make it a Babel transform instead of a Webpack loader, but it probably allows projects other than Webpack to use it. Anyway, there you have it. Now you can make changes to the code of your components, and without any refresh they will just change in the browser and keep their current state, right in front of your eyes. Combine this with CSS hot loading and you will be a very happy developer.

CSS Modules

When I think about Tobias Koppers (creator of Webpack) I imagine him sitting at his desk like Hugh Jackman in the movie Swordfish, though without the extra monitors for effect... and Tobias actually knows what he is doing. I do not think he has a mouse though, but a titanium alloyed keyboard to keep up with the stress of his fingers pounding on it 24/7. Webpack has an incredible codebase and Tobias manages to keep up with all the advancements that fit in with it. One of these advancements is CSS Modules, and of course Webpack supports it.

A short description of CSS Modules is that each CSS file you create has a local scope. Just like a JavaScript module has its local scope. The way it works is:


/* App.css */
.header {
  color: red;
}

import styles from './App.css';

export default function (props) {
  return <h1 className={styles.header}>Hello world!</h1>;
}


You also have to update the config:

import path from 'path';

const config = {
  module: {
    loaders: [{
      test: /\.js?$/,
      exclude: /node_modules/,
      loader: 'babel'
    }, {
      test: /\.css?$/,
      loader: 'style!css?modules&localIdentName=[name]---[local]---[hash:base64:5]'
    }]
  }
};

export default config;

So you only use classes, and those classes can be referenced by name when you import the CSS file. The point is that this .header class is not global; it will only apply to JavaScript modules importing the file. This is fantastic news, because you get the full power of CSS (:hover, [disabled], media queries, etc.) while referencing the rules from JavaScript.

There is more to CSS Modules, which you can look at here; composition is one of the most important parts. But the core concept is that you get the power of CSS with the scoping of JavaScript modules. Fantastic!

A boilerplate for this React & Webpack tutorial

To play around with this setup, you can use this boilerplate. It basically works like the examples shown here. Expressing project structure is difficult. Yes, we have our files and folders, but how those files are part of your application is often not obvious. With Webpack, you can stop thinking in files and start thinking in modules. A module is a folder with the React component, images, fonts, CSS and any child components. The files and folders now reflect how they are used inside your application, and that is a powerful concept.

This article is a guest post from Christian Alfoni, who is a speaker among other world-class React hackers at Reactive2015 in Bratislava, November 2-4 2015.

The React.js Way: Flux Architecture with Immutable.js

This article is the second part of the "The React.js Way" blog series. If you are not familiar with the basics, I strongly recommend you to read the first article: The React.js Way: Getting Started Tutorial.

In the previous article, we discussed the concept of the virtual DOM and how to think in the component way. Now it's time to combine them into an application and figure out how these components should communicate with each other.

Components as functions

The really cool thing about a single component is that you can think about it like a function in JavaScript. When you call a function with parameters, it returns a value. Something similar happens with a React.js component: you pass properties, and it returns the rendered DOM. If you pass different data, you will get different responses. This makes components extremely reusable and handy to combine into an application. This idea comes from functional programming, which is beyond the scope of this article. If you are interested, I highly recommend reading Mikael Brevik's Functional UI and Components as Higher Order Functions blog post for a deeper understanding of the topic.

Top-down rendering

Ok it's cool, we can combine our components easily to form an app, but it doesn't make any sense without data. We discussed last time that with React.js your app's structure is a hierarchy that has a root node where you can pass the data as a parameter, and see how your app responds to it through the components. You pass the data at the top, and it goes down from component to component: this is called top-down rendering.

React.js component hierarchy

It's great that we pass the data at the top and it goes down via component properties, but how can we notify a component at a higher level in the hierarchy if something should change? For example, when the user presses a button?
We need something that stores the actual state of our application, something we can notify when the state should change. The new state should be passed to the root node, and the top-down rendering should kick in again to generate (re-render) the new output (DOM) of our application. This is where Flux comes into the picture.

Flux architecture

You may have already heard about Flux architecture and the concept of it.
I’m not going to give a very detailed overview about Flux in this article; I've already done it earlier in the Flux inspired libraries with React post.

Application architecture for building user interfaces - Facebook flux

A quick reminder: Flux is a unidirectional data flow concept. You have a Store which contains the actual state of your application as pure data; it can emit events when the state changes and lets your application's components know what should be re-rendered. It also has a Dispatcher, a centralized hub that creates a bridge between your app and the Store. It has actions that you can call from your app, and it emits events for the Store. The Store subscribes to those events and changes its internal state when necessary. Easy, right? ;)

Flux architecture


Where are we with our current application? We have a data store that contains the actual state. We can communicate with this store and pass data to our app, which responds to the incoming state with the rendered DOM. It's really cool, but it sounds like lots of rendering (it is). Remember the component hierarchy and top-down rendering: everything responds to the new data.

I mentioned earlier that the virtual DOM optimizes DOM manipulations nicely, but that doesn't mean we shouldn't help it and minimize its workload. For this, we have to tell the component whether it should be re-rendered for the incoming properties or not, based on the new and the current properties. In the React.js lifecycle, you can do this with shouldComponentUpdate.

React.js luckily has a mixin called PureRenderMixin which compares the new incoming properties with the previous ones and stops rendering when they are the same. It uses the shouldComponentUpdate method internally.
That's nice, but PureRenderMixin can't compare objects properly. It checks reference equality (===), which will be false for different objects with the same data:

boolean shouldComponentUpdate(object nextProps, object nextState)

If shouldComponentUpdate returns false, then render() will be skipped until the next state change. (In addition, componentWillUpdate and componentDidUpdate will not be called.)

var a = { foo: 'bar' };  
var b = { foo: 'bar' };

a === b; // false  

The problem here is that the components will be re-rendered for the same data if we pass it as a new object (because of the different object reference). But it's also not going to work if we mutate the original object, because:

var a = { foo: 'bar' };
var b = a;
b.foo = 'baz';
a === b; // true

Sure it won't be hard to write a mixin that does deep object comparisons instead of reference checking, but React.js calls shouldComponentUpdate frequently and deep checking is expensive: you should avoid it.
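The shallow check that PureRenderMixin relies on can be approximated like this (a simplified sketch of the idea, not React's actual implementation):

```javascript
// Shallow equality: compares own keys one level deep, by === on each value
function shallowEqual(objA, objB) {
  if (objA === objB) return true;
  var keysA = Object.keys(objA);
  var keysB = Object.keys(objB);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(function (key) {
    return objA[key] === objB[key];
  });
}

// A PureRenderMixin-like shouldComponentUpdate for props only
function shouldComponentUpdate(currentProps, nextProps) {
  return !shallowEqual(currentProps, nextProps);
}
```

Two props objects holding the same nested object pass the check only if the nested reference is identical, which is exactly why reference-stable immutable data fits this model so well.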

I recommend checking out Facebook's Advanced Performance with React.js article.


The problem escalates quickly if our application state is a single, big, nested object, like our Flux store.
We would like to keep the object reference the same when the object doesn't change, and have a new object when it does. This is exactly what Immutable.js does.

Immutable data cannot be changed once created, leading to much simpler application development, no defensive copying, and enabling advanced memoization and change detection techniques with simple logic.

Check the following code snippet:

var stateV1 = Immutable.fromJS({
  users: [
    { name: 'Foo' },
    { name: 'Bar' }
  ]
});

var stateV2 = stateV1.updateIn(['users', 1], function () {
  return Immutable.fromJS({
    name: 'Barbar'
  });
});

stateV1 === stateV2; // false  
stateV1.getIn(['users', 0]) === stateV2.getIn(['users', 0]); // true  
stateV1.getIn(['users', 1]) === stateV2.getIn(['users', 1]); // false  

As you can see, we can use === to compare our objects by reference, which means we have a super fast way to compare objects, and it's compatible with React's PureRenderMixin. Accordingly, we should write our entire application with Immutable.js: our Flux Store should be an immutable object, and we pass immutable data as properties to our components.

Now let's go back to the previous code snippet for a second and imagine that our application component hierarchy looks like this:

User state

You can see that only the red ones will be re-rendered after the change of the state because the others have the same reference as before. It means the root component and one of the users will be re-rendered.

With immutability, we optimized the rendering path and supercharged our app. Combined with the virtual DOM, this makes the "React.js way" a blazing fast application architecture.

Learn more about how persistent immutable data structures work and watch the Immutable Data and React talk from the React.js Conf 2015.

Check out the example repository with ES6, Flux architecture, and Immutable.js:

The React.js Way: Getting Started Tutorial

Update: the second part is out! Learn more about the React.js way in the second part of the series: Flux Architecture with Immutable.js.

Now that the popularity of React.js is growing blazing fast and lots of interesting stuff is coming, my friends and colleagues started asking me more about how they can get started with React and how they should think in the React way.

React.js Tutorial Google Trends (Google search trends for React in programming category, Initial public release: v0.3.0, May 29, 2013)

However, React is not a framework; there are concepts, libraries and principles that turn it into a fast, compact and beautiful way to program your app on both the client and the server side.

In this two-part React.js tutorial series, I am going to explain these concepts and give recommendations on what to use and how. We will cover ideas and technologies like:

  • ES6 React
  • virtual DOM
  • Component-driven development
  • Immutability
  • Top-down rendering
  • Rendering path and optimization
  • Common tools/libs for bundling, ES6, request making, debugging, routing, etc.
  • Isomorphic React

And yes, we will write code; I would like to make it as practical as possible.
All the snippets and post-related code are available in the RisingStack GitHub repository.

This article is the first of the two. Let's jump in!


1. Getting Started with the React.js Tutorial

If you are already familiar with React and you understand the basics, like the concept of the virtual DOM and thinking in components, then this React.js tutorial is probably not for you. We will discuss intermediate topics in the upcoming parts of this series. It will be fun; I recommend checking back later.

Is React a framework?

In a nutshell: no, it's not.
Then what the hell is it, and why is everybody so keen to start using it?

React is the "View" in the application, a fast one. It also provides different ways to organize your templates and gets you think in components. In a React application, you should break down your site, page or feature into smaller pieces of components. It means that your site will be built by the combination of different components. These components are also built on the top of other components and so on. When a problem becomes challenging, you can break it down into smaller ones and solve it there. You can also reuse it somewhere else later. Think of it like the bricks of Lego. We will discuss component-driven development more deeply in this article later.

React also has this virtual DOM thing, which makes the rendering super fast but still keeps it easily understandable and controllable at the same time. You can combine this with the idea of components and get the power of top-down rendering. We will cover this topic in the second article.

Ok I admit, I still didn't answer the question. We have components and fast rendering, but why is it a game changer? Because React is mainly a concept and only a library second.
There are already several libraries following these ideas, doing it faster or slower, but slightly differently. Like every programming concept, React has its own solutions, tools and libraries turning it into an ecosystem. In this ecosystem, you have to pick your own tools and build your own ~framework. I know it sounds scary, but believe me, you already know most of these tools; we will just connect them to each other, and later you will be very surprised how easy it is. For example, for dependencies we won't use any magic, rather Node's require and npm. For pub-sub, we will use Node's EventEmitter, and so on.

(Facebook announced Relay, their framework for React, at the React.js Conf in January 2015. But it's not available yet; the date of the first public release is unknown.)

Are you excited already? Let's dig in!

The Virtual DOM concept in a nutshell

To track model changes and apply them to the DOM (i.e. rendering), we have to be aware of two important things:

  1. when the data has changed,
  2. which DOM element(s) to update.

For change detection (1), React uses an observer model instead of dirty checking (continuously checking the model for changes). That's why it doesn't have to calculate what has changed: it knows immediately. This reduces the calculations and makes the app smoother. But the really cool idea is how it manages the DOM manipulations:

For the DOM changing challenge (2), React builds a tree representation of the DOM in memory and calculates which DOM elements should change. DOM manipulation is heavy, and we would like to keep it to a minimum. Luckily, React tries to keep as many DOM elements untouched as possible. Since the minimal set of DOM manipulations can be calculated faster on the object representation, the cost of the actual DOM changes is reduced nicely.
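To make the idea concrete, here is a toy diff over a plain-object representation of a DOM tree (purely illustrative and far simpler than React's real reconciliation algorithm; the node shape is my own invention):

```javascript
// A node is { tag, text, children }. Return the paths of subtrees that changed.
function diff(oldNode, newNode, path, patches) {
  path = path || 'root';
  patches = patches || [];
  if (!oldNode || !newNode ||
      oldNode.tag !== newNode.tag || oldNode.text !== newNode.text) {
    patches.push(path); // this subtree must be re-rendered
    return patches;
  }
  var oldChildren = oldNode.children || [];
  var newChildren = newNode.children || [];
  var length = Math.max(oldChildren.length, newChildren.length);
  for (var i = 0; i < length; i++) {
    diff(oldChildren[i], newChildren[i], path + '/' + i, patches);
  }
  return patches;
}
```

Only the paths on the returned list would be touched in the real DOM; everything else stays exactly as it is.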


Since React's diffing algorithm uses the tree representation of the DOM and re-calculates whole subtrees when their parent gets modified (marked as dirty), you should be aware of your model changes, because the whole subtree will be re-rendered then.
Don't be sad, later we will optimize this behavior together. (Spoiler: with shouldComponentUpdate() and ImmutableJS.)

React.js Tutorial React re-render (source: React’s diffing algorithm - Christopher Chedeau)

How to render on the server too?

Since this kind of DOM representation is a fake DOM, it's possible to render the HTML output on the server side as well (without JSDom, PhantomJS, etc.). React is also smart enough to recognize that the markup is already there (from the server) and will only add the event handlers on the client side.

Interesting: React's rendered HTML markup contains data-reactid attributes, which help React track DOM nodes.


Component-driven development

It was one of the most difficult parts for me to pick up when I was learning React.
In component-driven development, you won't see the whole site in one template.
In the beginning you will probably think that it sucks, but I'm pretty sure that later you will recognize the power of thinking in smaller pieces that carry less responsibility. It makes things easier to understand, to maintain and to cover with tests.

How should I imagine it?

Check out the picture below. This is a possible component breakdown of a feature/site. Each of the bordered areas with different colors represents a single type of component. According to this, you have the following component hierarchy:

  • FilterableProductTable

What should a component contain?

First of all, it's wise to follow the single responsibility principle: ideally, design your components to be responsible for only one thing. When you start to feel a component is doing too much, consider breaking it down into smaller ones.

Since we are talking about component hierarchy, your components will use other components as well. But let's see the code of a simple component in ES5:

var HelloComponent = React.createClass({
    render: function() {
        return <div>Hello {this.props.name}</div>;
    }
});

But from now on, we will use ES6. ;)
Let’s check out the same component in ES6:

class HelloComponent extends React.Component {
  render() {
    return <div>Hello {this.props.name}</div>;
  }
}

As you can see, our component is a mix of JS and HTML code. Wait, what? HTML in my JavaScript? Yes, you probably think it's strange, but the idea here is to have everything in one place. Remember: single responsibility. It makes a component extremely flexible and reusable.

In React, it's possible to write your component in pure JS like:

  render () {
    return React.createElement("div", null, "Hello ", this.props.name);
  }

But I think it's not very comfortable to write your HTML this way. Luckily, we can write it in JSX syntax (a JavaScript extension), which lets us write the HTML inline:

  render () {
    return <div>Hello {this.props.name}</div>;
  }

What is JSX?
JSX is an XML-like syntax extension to ECMAScript. JSX and HTML syntax are similar, but they differ on some points. For example, the HTML class attribute is called className in JSX. For more differences and deeper knowledge, check out Facebook's HTML Tags vs. React Components guide.

Because JSX is not supported in browsers by default (maybe someday), we have to compile it to JS. I'll write about how to use JSX in the Setup section later. (By the way, Babel can also transpile JSX to JS.)

Useful links about JSX:
- JSX in depth
- Online JSX compiler
- Babel: How to use the react transformer.

What else can we add?

Each component can have an internal state, some logic, event handlers (for example: button clicks, form input changes), and it can also have inline style. Basically, everything that is needed for proper displaying.

You can see a {this.props.name} in the code snippet above. It means we can pass properties to our components when we are building the component hierarchy, like: <MyComponent name="John Doe" />
It makes the component reusable and makes it possible to pass our application state from the root component down to the child components through the whole application, always passing just the necessary part of the data.

Check this simple React app snippet below:

class UserName extends React.Component {
  render() {
    return <div>name: {this.props.name}</div>;
  }
}

class User extends React.Component {
  render() {
    return <div>
        <h1>City: {this.props.user.city}</h1>
        <UserName name={this.props.user.name} />
      </div>;
  }
}

var user = { name: 'John', city: 'San Francisco' };
React.render(<User user={user} />, mountNode);

Useful links for building components:
- Thinking in React

React loves ES6

ES6 is here and there is no better place for trying it out than your new shiny React project.

React wasn't born with ES6 syntax; the support arrived this year, in version v0.13.0.

However, explaining ES6 deeply is beyond the scope of this article; we will just use some features of it, like classes, arrows, consts and modules. For example, we will inherit our components from the React.Component class.

Since ES6 is only partly supported by browsers, we will write our code in ES6 but transpile it to ES5 later, making it work with every modern browser, even those without ES6 support.
To achieve this, we will use the Babel transpiler. It has a nice, compact intro to the supported ES6 features; I recommend checking it out: Learn ES6

Useful links about ES6
- Babel: Learn ES6
- React ES6 announcement

Bundling with Webpack and Babel

I mentioned earlier that we will involve tools you are already familiar with and build our application from a combination of those. The first tool, which might be well known, is the Node.js module system and its package manager, npm. We will write our code in "Node style" and require everything we need. React is available as a single npm package.
This way, our component will look like this:

// would be in ES5: var React = require('react/addons');
import React from 'react/addons';

class MyComponent extends React.Component { ... }

// would be in ES5: module.exports = MyComponent;
export default MyComponent;  

We are going to use other npm packages as well. Most npm packages make sense on the client side too; for example, we will use debug for debugging and superagent for composing requests.

Now we have a dependency system from Node (more accurately, ES6 modules) and a solution for almost everything from npm. What's next? We should pick our favorite libraries for our problems and bundle them up for the client as a single codebase. To achieve this, we need a solution for making them run in the browser.

This is the point where we should pick a bundler. The most popular solutions today are the Browserify and Webpack projects. We are going to use Webpack, because my experience is that the React community prefers it. However, I'm pretty sure that you can do the same with Browserify as well.

How does it work?

Webpack bundles our code and the required packages into output file(s) for the browser. Since we are using JSX and ES6, which we would like to transpile to ES5, we have to place a JSX and ES6-to-ES5 transpiler into this flow as well. Actually, Babel can do both for us. Let's just use that!

We can do that easily because Webpack is configuration-oriented.

What do we need for this? First, we need to install the necessary modules (start with npm init if you don't have a package.json file yet).

Run the following commands in your terminal (Node.js or io.js and npm are necessary for this step):

npm install --save-dev webpack  
npm install --save-dev babel  
npm install --save-dev babel-loader  

After that, we create the webpack.config.js file for Webpack (it's ES5, because we don't have the ES6 transpiler in the Webpack configuration file):

var path = require('path');

module.exports = {
  entry: path.resolve(__dirname, '../src/client/scripts/client.js'),
  output: {
    path: path.resolve(__dirname, '../dist'),
    filename: 'bundle.js'
  },
  module: {
    loaders: [{
      test: /src\/.+.js$/,
      exclude: /node_modules/,
      loader: 'babel'
    }]
  }
};
If we did it right, our application starts at ./src/client/scripts/client.js and ends up in ./dist/bundle.js when we run the webpack command.

After that, you can just include the bundle.js script into your index.html and it should work:
<script src="bundle.js"></script>

(Hint: you can serve your site with node-static. Install the module with npm install -g node-static and start it with static . to serve your folder's content.)

Project setup

Now we have installed and configured Webpack and Babel properly.
As in every project, we need a project structure.

Folder structure

I prefer to follow the project structure below:

    webpack.js (js config over json -> flexible)
  app/ (the React app: runs on server and client too)
      __tests__ (Jest test folder)
    index.js (just to export app)
  client/  (only browser: attach app to DOM)

The idea behind this structure is to separate the React app from the client and server code, since our React app can run on both the client and the server side (an isomorphic app; we will dive deep into this in an upcoming blog post).

How to test my React app

When we are moving to a new technology, one of the most important questions should be testability. Without a good test coverage, you are playing with fire.

Ok, but which testing framework to use?
My experience is that testing a front-end solution always works best with the test framework made by the same creators. Accordingly, I started to test my React apps with Jest. Jest is a test framework by Facebook and has many great features that I won't cover in this article.

I think it's more important to talk about the way of testing a React app. Luckily, the single responsibility principle forces our components to do only one thing, so we should test only that thing: pass the properties to the component, trigger the possible events, and check the rendered output. Sounds easy, because it is.

For a more practical example, I recommend checking out the Jest React.js tutorial.

Test JSX and ES6 files

To test our ES6 syntax and JSX files, we should transform them for Jest. Jest has a config variable (scriptPreprocessor) where you can define a preprocessor for that.
First we create the preprocessor, then we pass its path to Jest. You can find a working example of a Babel Jest preprocessor in our repository.

Jest also has an example for React ES6 testing.

(The Jest config goes into the package.json.)


In this article, we examined why React is fast and scalable, and how different its approach is. We went through how React handles rendering, what component-driven development is, and how you should set up and organize your project. These are the very basics.

In the upcoming "The React way" articles we are going to dig deeper.

I still believe that the best way to learn a new programming approach is to start developing and writing code.
That's why I would like to ask you to write something awesome, and also to spend some time checking out the official React website, especially the guides section. It's an excellent resource; the Facebook developers and the React community did an awesome job with it.

Next up

If you liked this article, subscribe to our newsletter for more. The remaining parts of The React way post series are coming soon. We will cover topics like:

  • immutability
  • top-down rendering
  • Flux
  • isomorphic way (common app on client and server)

Feel free to check out the repository:

Update: the second part is out! Learn more about the React.js way in the second part of the series: Flux Architecture with Immutable.js.

Functional UI and Components as Higher Order Functions

This article is a guest post from Mikael Brevik, who is a speaker at JSConf Budapest on 14-15th May 2015.


Once upon a time in web development, we had perfect mental models through static HTML. We could predict the output without giving it too much thought. If we were to change any of the contents on the site, we did a full refresh, and we could still mentally visualise what the output would be. We would communicate between elements on the website through a simple protocol of text and values, through attributes and children. But in time, as the web got more complex and we started to think about sites as applications, we got the need for relative updates without a full page refresh: the need to change some sub-part of the view without any server-side request. We started building up state in the DOM, and we broke the static mental model. This made our applications harder to reason about. Instead of just being able to look at the code and know what it was doing, we have to try really, really hard to imagine what the built-up state is at any given point.

Making web applications got harder as the systems became more and more complex, and a lot of this has to do with state. We should be able to reason about an application in a simpler way, building complex systems by combining small components that are more focused and don't require us to know what is happening in other parts of the system, just as with HTML.

Functions and Purity

How can we go back to the days of static mental models and just being able to read the code from top to bottom? We still need dynamic updates of the view, as we want interactive, living pages that react to users, but we also want to keep the mental model of refreshing the entire site. To achieve this, we can take a functional approach and build an idempotent system: that is, a system which, given the same input, produces the same output.

Let us introduce the concept of functions with referential transparency. These are functions whose invocations we can replace with their output values, and the system would still work as if the functions had been invoked. A function that is referentially transparent is also pure: a function with no side-effects. A pure, referentially transparent function is predictable in the sense that, given an input, it always returns the same output.

const timesTwo = (a) => a*2;

timesTwo(2) + timesTwo(2)  
//=> 8

2 * timesTwo(2)  
//=> 8

4 + 4  
//=> 8

The function timesTwo as seen above, is both pure and referentially transparent. We can easily switch out timesTwo(2) with the result 4 and our system would still work as before. There are no side-effects inside the function that alter the state of our application, other than its output. We have the static mental model, as we can read the contents, from top-to-bottom, and based on the input we can predict the output.

Be wary though. Sometimes you can have side-effects without knowing it. This often happens through mutation of passed-in objects. Not only can you have side-effects, but you can create horizontally coupled functions which can alter each other's behaviour in unexpected ways. Consider the following:

const obj = { foo: 'bar' };

const coupledOne = (input) =>  
  console.log(input.foo = 'foo');

const coupledTwo = (input) =>  
  // move to end of message queue, simulate async behaviour
  setTimeout(_ => console.log(input.foo));

> coupledTwo(obj) // prints 'foo' !!!!!
> coupledOne(obj) // prints 'foo'

Of course, the code sample above is contrived and very obvious, but something similar can happen more indirectly and is fairly common. You get passed a reference to an object and, without thinking about it, you mutate its contents. Other functions can depend on that object and show surprising behaviour. The solution is to avoid mutating the input: make a copy of it, change the copy, and return the newly created copy (treating the data as immutable).
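To make the copy-instead-of-mutate idea concrete, here is a small sketch (the function name `rename` is made up for illustration; `Object.assign` creates the shallow copy):

```javascript
// A non-mutating variant: copy the input, change the copy, return it.
// The original object stays untouched.
const rename = (input) =>
  Object.assign({}, input, { foo: 'foo' });

const obj = { foo: 'bar' };
const renamed = rename(obj);

obj.foo     //=> 'bar' - the original is untouched
renamed.foo //=> 'foo'
```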

By having our functions referentially transparent, we get predictability. We can trust that if a function returns a result once, it returns the same output every time - given the same input.

const timesTwo = (a) => a*2;  
> timesTwo(2) //=> 4
> timesTwo(2) //=> 4
> timesTwo(2) //=> 4
> timesTwo(2) //=> 4
> timesTwo(2) //=> 4

And by having our system predictable, it is also testable. There is no need to build up a big state which our system relies on; we can take one function, know the contract it expects (the input), and expect the same output. There is no need to test the inner workings of a function, just its output. Never test how it works, just that it works.


Composability and Higher Order Functions

But we don't get a large, usable system just by having some functions. Or do we? We can combine several smaller functions to build a complex, advanced system. If we think about it, a system just handles data, transforming values and lists of values into different values and lists of values. And with all our functions transparent, we can use them as higher order functions and compose them in different ways. Higher order functions are, as probably explained many times, just functions that can be passed as input to other functions or be returned from functions. In JavaScript we use higher order functions every day, perhaps without thinking of them as such: a callback is one example of a higher order function.

We can use higher order functions to create new functions derived from one or more other functions. One easy example is a maybe function, which can decorate a function into being null-safe. Below we see a naive implementation of the maybe decorator. We won't get into the full implementation here, but you can see an example in Reginald Braithwaite's fantastic book, JavaScript Allongé.

const maybe = function (fn) {  
  return function (input) {
    if (!input) return;
    return fn.call(this, input);
  };
};

const impl1 = input => input.toLowerCase();  
impl1(void 0) // would crash

const impl2 = maybe(input => input.toLowerCase());  
impl2(void 0) // would **not** crash  

Another use of higher order functions is to take two or more functions and combine them into one. This is where our pure functions really shine. We can implement a function, compose, which takes two functions and pipes the result of one function as input into the other: taking two different functions and creating a new, derived function as the combination of the two. Let's look at another naive implementation:

const compose = (fn1, fn2) =>  
  input => fn1(fn2(input));

// Composing two functions
const prefix = (i) => 'Some Text: ' + i;  
const shrink = (i) => i.toLowerCase();

const composed = compose(prefix, shrink);  
composed('FOO') //=> 'Some Text: foo'  

The last building block we will look at is partial application: the act of deriving a new function with some of its inputs pre-set. Let's say we have a function taking two inputs, a and b, but we want a function that only takes one input, b, where the input a is set to a specific value.

const partial = (fn, a) =>  
  (b) => fn(a, b);

const greet = (greeting, name) =>  
  greeting + ', ' + name + '!';

const hello = partial(greet, 'Hello');

hello('Hank Pym') //=> 'Hello, Hank Pym!'  

And we can of course compose all the different examples into one happy function.

const shrinkedHello = maybe(compose(  
  partial(greet, 'Hello'),
  (name) => name.toLowerCase()));

shrinkedHello(void 0) // not crashing  
shrinkedHello('HANK PYM') //=> 'Hello, hank pym!'  

Now we have a basic understanding of how to combine small building blocks into functions that do more complex things. As each and every "primitive" function we have is pure and referentially transparent, our derived functions will be as well. This means our system will be idempotent. However, there is one thing we are missing: communication with the DOM.

The DOM is a Side-effect

We want our system to output something other than text to the console. Our application should show pretty boxes with useful information in them. We can't do that without interacting with the DOM (or some other output end-point). Before we move on, there is one important thing to remember: the DOM is a huge side-effect and a massive bundle of state. Consider the following code, which is similar to the earlier example of functions tightly coupled through objects:

dom('#foo').innerHTML = 'bar';

const coupledOne = (input) =>  
  input.innerText = 'foo';

const coupledTwo = (input) =>  
  // move to end of message queue, simulate async behaviour
  setTimeout(_ => console.log(input.innerText));

coupledTwo(dom('#foo')) //=> 'foo' !!!!!  
coupledOne(dom('#foo')) //=> 'foo'  

We need to treat the DOM as the integration point it is. As with any other integration point, we want to handle it at the far edges of our data flow: it should just represent the output of our system, not serve as our blob of state. Instead of letting our functions handle the interaction with the DOM, we do that somewhere else. Look at the following example/pseudo code:

const myComp = i => <h1>{i}</h1>;  
const myCompTwo = i => <h2>{myComp(i)}</h2>;

const output = myComp('Hank Pym');  
const newOutput = output + myComp('Ant-Man');

// Persist to the DOM somewhere

A Virtual DOM, like the one React has, is a way to abstract away the integration with the DOM. Moreover, it allows us to do dynamic page refreshes that are semantically just like static HTML, but without the browser actually doing the refresh (and it does this performantly, diff-ing between changes and only touching the real DOM when necessary).

const myComp = i => <h1>{i}</h1>;  
const myCompTwo = i => <h2>{myComp(i)}</h2>;

const output = myComp('Hank Pym');


const newOutput = output + myComp('Ant-Man');

// only update the second output

What we've seen in the last two examples aren't "normal" functions; they are view components: functions which return a view representation to be passed to a Virtual DOM.

Higher Order Components

Everything we've seen about functions is also true for components. We can build complex views by combining many small, less complex components. We also get the static mental model of pure and referentially transparent functions, but with views. We get the same reasoning as we had in the good old days with HTML, but instead of communicating only with simple strings and values, we can communicate with more complex objects and metadata. The communication can still work as with HTML, where information is passed from the top.

Referentially transparent components will give us predictable views, and this means testable views.

const myComp = component(input => <h1>{input}</h1>);

expect(renderToString(myComp('Hank Pym'))).to.equal('<h1>Hank Pym</h1>');  
expect(renderToString(myComp('Sam Wilson'))).to.equal('<h1>Sam Wilson</h1>');  

We can use combinators (functions which operate on higher order functions and combine behaviour), like map, which is a fairly common pattern in React. This works exactly as you'd expect: we can transform a list of data into a list of components representing that data.

const listItem = component(i => <li>{i}</li>);

const output = ['Wade', 'Hank', 'Cable'].map(listItem);  
// output is now a list of listItem components

The components created in this example are made using a library called Omniscient.js, which adds syntactic sugar on top of React components to encourage referentially transparent components. Documentation of the library can be seen on the homepage.

These kinds of components can also be composed in different ways. For instance, we can communicate in a nested structure, where components are passed as children.

const myComp = component(input => <h1>{input}</h1>);  
const myCompTwo = component(input => <div>{myComp(input)}</div>);

const output = myCompTwo('Hank Pym');  

Here we define myComp as an explicit child of myCompTwo. But this hard-binds myCompTwo to myComp, and you wouldn't be able to use myCompTwo without the other. We can borrow concepts from our previously defined combinators (i.e. compose) to derive a component which leaves both myComp and myCompTwo usable on their own.

const h1 = component(i => <h1>{i}</h1>);  
const em = component(i => <em>{i}</em>);

const italicH1 = compose(h1, em);  
var output = italicH1('Wade Wilson');  

In the example above, we create the derived component italicH1, which has the composed behaviour of both h1 and em, but we can still use h1 and em independently. This is just like what we saw previously with pure functions. We can't use the exact same implementation of compose as before, but we can take a similar approach. A straightforward implementation could be something like the following:

function compose (...fns) {  
  return (...args) =>
    fns.reduceRight((child, fn) =>
      fn.apply(this, child ? args.concat(child) : args),
      null);
}

This function takes all passed components and, from the right, reduces over them, passing the accumulated children along until there are no more components to apply.

We can also borrow the concept of partial application to derive new components. As an example, imagine we have a header element which takes options defining a class name, and title text passed as a child. If we want to use that component several times throughout our system, we wouldn't want to pass in the class name as a string everywhere, but rather create a derived component which has that class name baked in. So we could create a header-one element, underlinedH1.

const comp = component(({children, className}) =>  
  <h1 className={className}>{children}</h1>
);

const underlinedH1 = partial(comp, {  
  className: 'underline-title'
});
var output = underlinedH1('Hank');  

We derive a component which always returns an underlined header. The code for implementing partial application for components is a bit more complicated and can be seen as a gist. Following the functional pattern further, we can do something like the maybe decorator with components as well:

const maybe = function (fn) {  
  return (input) => {
    if (!input) return <span />;
    return fn(input);
  };
};

const comp = maybe(component(({children}) => <h1>{children}</h1>));  

We can combine the different transformation functions, partial applications and components as we did with functions.

const greet = component(({greeting, children}) =>  
  <h1>{greeting}, {children}!</h1>
);

const shrinkedHello = maybe(compose(  
  partial(greet, { greeting: 'Hello' }),
  (name) => name.toLowerCase()));


In this post we've seen how we can use functional programming to make systems that are much easier to reason about, and how to get systems with a static mental model, much like we had with the good old HTML. Instead of just communicating with attributes and values, we can have a protocol of more complex objects, where we can even pass down functions or something like event emitters.

We've also seen how we can use the same principles and building blocks to make predictable and testable views, where we always get the same output given the same input. This makes our application more robust and gives us a clear separation of concerns. It is a product of having many smaller components which we can re-use in different settings, both directly and in derived forms.

Although the examples shown in this blog post use a Virtual DOM and React, the concepts are sound even without that implementation, and are something you could think about when building your views.

Disclaimer: This is an ongoing experiment, and some of the concepts of combinators on higher order components aren't well tested yet; they are more conceptual thoughts than perfect implementations. The code works conceptually and in basic implementations, but hasn't been used extensively.

See more on Omniscient.js and referentially transparent components on the project homepage, or feel free to ask questions using issues.

This article is a guest post from Mikael Brevik, who is a speaker at JSConf Budapest on 14-15th May 2015.

Flux inspired libraries with React

There are lots of Flux or Flux-inspired libraries out there. They try to solve different kinds of problems, but which one should you use? This article tries to give an overview of the different approaches.

What is Flux? (the original)

An application architecture for React utilizing a unidirectional data flow. - flux

Ok, but why?

Flux tries to avoid the complex cross dependencies between your modules (MVC for example) and realize a simple one-way data flow. This helps you to write scalable applications and avoid side effects in your application.

Read more about this and about the key properties of Flux architecture at Fluxxor's documentation.

Original flux

Facebook's original Flux has four main components: a singleton Dispatcher, Stores, Actions and Views (or controller-views).


From the Flux overview:

The dispatcher is the central hub that manages all data flow in a Flux application.

In details:

It is essentially a registry of callbacks into the stores.
Each store registers itself and provides a callback. When the dispatcher responds to an action, all stores in the application are sent the data payload provided by the action via the callbacks in the registry.
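The registry idea can be sketched in a few lines. This is a toy illustration only, not Facebook's actual Dispatcher (which also supports waitFor and deregistration):

```javascript
// A hypothetical, minimal dispatcher: just a registry of store callbacks.
// Every registered store receives every dispatched action.
const createDispatcher = () => {
  const callbacks = [];
  return {
    register(callback) { callbacks.push(callback); },
    dispatch(action) { callbacks.forEach(cb => cb(action)); }
  };
};

const dispatcher = createDispatcher();
const received = [];
dispatcher.register(action => received.push(action.type));
dispatcher.dispatch({ type: 'CREATE_USER', payload: { name: 'Hank' } });
// received is now ['CREATE_USER']
```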


Actions can have a type and a payload. They can be triggered by the Views or by the Server (external source). Actions trigger Store updates.

Facts about Actions:

  • Actions should be descriptive:

    The action (and the component generating the action) doesn't know how to perform the update, but describes what it wants to happen. - Semantic Actions

  • But an Action shouldn't trigger another Action: No Cascading Actions

  • About Actions dispatches

    Action dispatches and their handlers inside the stores are synchronous. All asynchronous operations should trigger an action dispatch that tells the system about the result of the operation - Enforced Synchrony

Later you will see that Actions can be implemented and used in different ways.


Stores contain the application state and logic.

Every Store receives every action from the Dispatcher, but a single store handles only the events relevant to it. For example, the User store handles only user-specific actions like createUser and ignores the others.

After the Store has handled the Action and updated itself, it broadcasts a change event. This event will be received by the Views.

A Store shouldn't be updated externally; updates should be triggered internally as a response to an Action: Inversion of Control.


Views subscribe to one or multiple Stores and handle the store change event.
When a change event is received, the view gets the data from the Store via the Store's getter functions, then renders with the new data.

1. Store change event received
2. Get data from the Store via getters
3. Render view

FB Flux

You can find several Flux implementations on GitHub, the most popular libraries are the followings:

Beyond Flux

Lots of people think that Flux could be more reactive, and I can only agree with them.
Flux is a unidirectional data flow, which is very similar to event streams.

Now let's see some other ways to have something Flux-like but being functional reactive at the same time.


Reflux has refactored Flux to be a bit more dynamic and be more Functional Reactive Programming (FRP) friendly - refluxjs

Reflux is a more reactive Flux implementation by @spoike because he found the original one confusing and broken at some points: Deconstructing ReactJS's Flux

The biggest difference between Flux and Reflux is that there is no centralized dispatcher.

Actions are functions which can take a payload when called. Actions are listenable, and Stores can subscribe to them. In Reflux, every action acts as a dispatcher.


Reflux provides mixins for React to listen to store changes easily.
It supports both async and sync actions, and it's also easy to handle async errors with Reflux.

In Reflux, stores can listen to other stores serially or in parallel, which sounds useful, but it increases the cross dependencies between your stores. I'm afraid you can easily find yourself in the middle of a circular dependency.

A problem arises if we create circular dependencies. If Store A waits for Store B, and B waits for A, then we'll have a very bad situation on our hands. - flux


There is a circular dependency check for some cases in reflux implemented and is usually not an issue as long as you think of data flows with actions as initiators of data flows and stores as transformations. - @spoike


The Flux architecture allows you to think your application as an unidirectional flow of data, this module aims to facilitate the use of RxJS Observable as basis for defining the relations between the different entities composing your application. - rx-flux

rx-flux is a newcomer and uses RxJS, the reactive extensions library for JavaScript, to implement a unidirectional data flow.

rx-flux is more similar to Reflux than to the original Flux (from the readme):

  • A store is an RxJS Observable that holds a value
  • An action is a function and an RxJS Observable
  • A store subscribes to an action and updates accordingly its value.
  • There is no central dispatcher.

When the Stores and Actions are RxJS Observables, you can use the power of Rx to handle your application's business logic in a Functional Reactive way, which can be extremely useful in asynchronous situations.

If you don't like Rx, there are similar projects with Bacon.js like fluxstream or react-bacon-flux-poc.

If you like the concept of FRP, I recommend you to read @niklasvh's article about how he combined Immutable.js and Bacon.js to have a more reactive Flux implementation: Flux inspired reactive data flow using React and Bacon.js
niklasvh's example code for lazy people: flux-todomvc


Omniscient is a really different approach compared to Flux. It uses the power of Facebook's Immutable.js to speed up rendering: it renders only when the data has actually changed. This kind of optimized invocation of React's render function can help us build performant web applications.

Rendering is already optimized in React with the concept of the Virtual DOM, but React still computes DOM diffs, which is also computationally heavy. With Omniscient you can reduce the React calls and avoid the diff calculations.
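A sketch of why immutable data makes the "should I re-render?" question cheap: with immutable structures, change detection becomes a reference comparison instead of a deep diff. This is conceptual only; `shouldUpdate` is a made-up stand-in for the kind of check Omniscient performs in React's shouldComponentUpdate:

```javascript
// With immutable data, a changed value is always a new reference,
// so a cheap !== check replaces an expensive deep comparison.
const shouldUpdate = (currentProps, nextProps) =>
  currentProps !== nextProps;

const state = { user: { name: 'Hank' } };
const sameState = state;                         // nothing changed
const nextState = Object.assign({}, state, {     // change => new reference
  user: { name: 'Sam' }
});

shouldUpdate(state, sameState) //=> false - skip rendering
shouldUpdate(state, nextState) //=> true  - re-render
```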

What? / Example:
Imagine the following scenario: the user's name is changed. What will happen in Flux, and what in Omniscient?
In Flux, every user-related view component will be re-rendered, because they are subscribed to the user Store, which broadcasts a change event.
In Omniscient, only the components that use the user's name cursor will be re-rendered.
Of course it's possible to split Flux state across multiple Stores, but in most cases it doesn't make sense to keep the name in a separate store.

Omniscient is for React, but it's really just a helper for React; the real power comes from Immstruct, which can be used without Omniscient, with other libraries like virtual-dom.

It may not be obvious at first what Omniscient does. I think this todo example helps the most.

You can find a more complex demo here: demo-reactions

It would be interesting to hear what companies are using Omniscient in production.
If you do so, I would love to hear from you!

Further reading

The State of Flux
Flux inspired reactive data flow using React and Bacon.js
Deconstructing ReactJS's Flux
React + RxJS + Angular 2.0's di.js TodoMVC Example by @joelhooks

From AngularJS to React: The Isomorphic Way

Last week we were working on making our website indexable for search engines. This is the story of rewriting it and the summary of what we have learnt.


Two months ago, when we created our website, we had to decide which technologies we wanted to use. We only had a few static pages with some event tracking, so it was very simple, but we wanted to keep it scalable and as fast as possible.
Our team is quite experienced in AngularJS, so it seemed reasonable to choose Angular on the frontend side.

Please note, that this article is not about why React or AngularJS is better. It always depends on your use case.

The "Angular way"

AngularJS is a pretty cool framework by Google - it provides many great features like routing and two-way data binding to supercharge your development and create testable applications.

Angular helps you create single page applications and renders the content on the client side - but search robots without JavaScript support cannot index your content.
This can be a serious problem from an SEO point of view, especially when you want to make your freshly founded Node.js company well known :)

Our Angular site without JavaScript


At RisingStack, we do not like half measures, and we wanted to fix this - this is when a prerender service came into the picture. It is an external service (also an open source project) that renders your site on an external server with a headless browser and sends the result back as HTML.
It makes your site readable for most search engines, but it also breaks your AngularJS bindings, so you cannot serve the prerendered output to real human users.

Because our site uses Koa, the generator-based Node.js framework which the service does not support out of the box, we had to implement it ourselves.
So RisingStack released a koa-prerender middleware for Koa.
In a nutshell: it detects crawlers from the request parameters (_escaped_fragment_, user-agent, etc.), then calls the external prerender service and responds with the static HTML content.
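The detection step can be sketched as follows. This is hypothetical illustration only, not the actual koa-prerender API, and the crawler list is just an example:

```javascript
// Decide whether a request comes from a crawler: either it carries the
// _escaped_fragment_ query parameter, or its user-agent matches a known bot.
const isCrawlerRequest = (url, userAgent) => {
  if (url.indexOf('_escaped_fragment_') !== -1) return true;
  const crawlerAgents = ['googlebot', 'yahoo', 'bingbot']; // example list
  return crawlerAgents.some(agent =>
    (userAgent || '').toLowerCase().indexOf(agent) !== -1);
};

isCrawlerRequest('/?_escaped_fragment_=', '')                     //=> true
isCrawlerRequest('/about', 'Mozilla/5.0 (compatible; Googlebot)') //=> true
isCrawlerRequest('/about', 'Mozilla/5.0 Chrome/40')               //=> false
```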

We were happy because our site was finally reachable by most search engines like Google and Yahoo - but still not all of them. Also, user-agents can change, and we did not want to maintain such a list. We kept looking for a better solution.

Our Angular site without JavaScript, but with koa-prerender

Isomorphic JavaScript

We wanted something that renders our content on the server side on first load, but provides the experience of an SPA after that.
We needed something that can render on both the client and the server side and share the application state between the two, so the client can continue from the point where the server finished its job.
To implement this kind of architecture, the code base has to be shared between the server and the client (Browserify/Webpack), and the application has to be able to render on both sides.

"Browserify lets you require('modules') in the browser by bundling up all of your dependencies." -

In practice this means you can use the Node.js dependency system and npm packages in the browser: for example, superagent for AJAX requests, async for better flow control, etc.

Isomorphic JavaScript architecture, Source: AirBnb Nerds

If you would like to read more about isomorphic applications don't miss AirBnb's article: Isomorphic JavaScript: The Future of Web Apps.


"A JavaScript library for building user interfaces." - React

React provides high performance client and server side rendering with a one-way flow for data binding. ReactJS is open source and built by the Facebook Engineering team.

Because React is not a framework you should extend it with other solutions like the Flux application architecture by Facebook.

About Flux
"Flux eschews MVC in favor of a unidirectional data flow. When a user interacts with a React view, the view propagates an action through a central dispatcher, to the various stores that hold the application's data and business logic, which updates all of the views that are affected. This works especially well with React's declarative programming style, which allows the store to send updates without specifying how to transition views between states." - Flux docs

The Flux architecture, source:

React + Flux + Koa = isomorphic goodness

After we decided to create our isomorphic application with React and Flux, we started looking for ideas and samples from others.
Finally, we based our site on Yahoo's flux-examples.

The flux-examples repository provides sample code for two isomorphic Node.js applications with routing, built on Express.

The idea behind the examples is very simple: you write JavaScript code that is runnable both on the server and on the client side, bundled with Webpack (we swapped Webpack for Browserify).

The main concept behind the isomorphic architecture is the following:
The application state and code are shared between your browser and the server.
After the server receives a request, it creates a new Flux/React application instance, renders the view, then injects the state of the stores into the rendered HTML output: <script>var STATE = ...</script>. The server responds with this rendered file.

The browser loads the same code (built with Browserify/Webpack) and bootstraps the application from the shared state (shared by the server and injected into the global/window scope). This means our application can continue from the point where the server finished.
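The server-side half of this hand-off can be sketched like this (`renderPage` is a made-up helper name; only the pattern of embedding serialized store state next to the markup is the point):

```javascript
// Server side: render the app's markup and embed its state in the page,
// so the client bundle can bootstrap from the same state.
const renderPage = (html, state) =>
  '<div id="app">' + html + '</div>' +
  '<script>var STATE = ' + JSON.stringify(state) + ';</script>';

const page = renderPage('<h1>Hello</h1>', { user: 'Hank' });
// The client bundle then reads the global STATE and fills its stores
// from it, continuing where the server left off.
```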

The user gets a fully rendered site on first load, like in the old times, but is also able to continue browsing with a super fast SPA.
Because the site content is viewable without JavaScript, search engines can index it.

(Our site uses Koa, so we had to migrate some middlewares, which we are going to publish soon in the RisingStack GitHub repository.)

Our React site without JavaScript


The biggest win for us here is that we finally have an indexable isomorphic SPA. It wasn't our top priority, but our site now also works without JavaScript for human users.

Still, the point of this post is not that React is superior to AngularJS - only that React is better in some cases, and vice versa. It always depends on your use case.

They can also live in symbiosis; a good example of this is the ngReactGrid project.

That's it for now - we are very excited about what the isomorphic era will bring for web development and Node.js.

If you have something similar, it would be great to hear your story. Ping us on our Twitter channel: @RisingStack

Just published a full isomorphic example:

Need help in developing your application?

RisingStack provides JavaScript development and consulting services - ping us if you need a helping hand!