Writing a JavaScript Framework - Data Binding with ES6 Proxies

This is the fifth chapter of the Writing a JavaScript framework series. In this chapter, I am going to explain how to create a simple, yet powerful data binding library with the new ES6 Proxies.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing
  3. Sandboxed code evaluation
  4. Data binding introduction
  5. Data Binding with ES6 Proxies (current chapter)
  6. Custom elements
  7. Client-side routing

Prerequisites

ES6 made JavaScript a lot more elegant, but the bulk of new features are just syntactic sugar. Proxies are one of the few non polyfillable additions. If you are not familiar with them, please take a quick look at the MDN Proxy docs before going on.

Having a basic knowledge of the ES6 Reflection API and Set, Map and WeakMap objects will also be helpful.

The nx-observe library

nx-observe is a data binding solution in under 140 lines of code. It exposes the observable(obj) and observe(fn) functions, which are used to create observable objects and observer functions. An observer function automatically executes when an observable property used by it changes. The example below demonstrates this.

// this is an observable object
const person = observable({name: 'John', age: 20})

function print () {  
  console.log(`${person.name}, ${person.age}`)
}

// this creates an observer function
// outputs 'John, 20' to the console
observe(print)

// outputs 'Dave, 20' to the console
setTimeout(() => person.name = 'Dave', 100)

// outputs 'Dave, 22' to the console
setTimeout(() => person.age = 22, 200)  

The print function passed to observe() reruns every time person.name or person.age changes. print is called an observer function.

If you are interested in a few more examples, please check the GitHub readme or the NX home page for a more lifelike scenario.

Implementing a simple observable

In this section, I am going to explain what happens under the hood of nx-observe. First, I will show you how changes to an observable's properties are detected and paired with observers. Then I will explain a way to run the observer functions triggered by these changes.

Registering changes

Changes are registered by wrapping observable objects into ES6 Proxies. These proxies seamlessly intercept get and set operations with the help of the Reflection API.

The variables currentObserver and queueObserver() are used in the code below, but will only be explained in the next section. For now, it is enough to know that currentObserver always points to the currently executing observer function, and queueObserver() is a function that queues an observer to be executed soon.

/* maps observable properties to a Set of
observer functions, which use the property */  
const observers = new WeakMap()

/* points to the currently running 
observer function, can be undefined */  
let currentObserver

/* transforms an object into an observable 
by wrapping it into a proxy, it also adds a blank  
Map for property-observer pairs to be saved later */  
function observable (obj) {  
  observers.set(obj, new Map())
  return new Proxy(obj, {get, set})
}

/* this trap intercepts get operations,
it does nothing if no observer is executing  
at the moment */  
function get (target, key, receiver) {  
  const result = Reflect.get(target, key, receiver)
  if (currentObserver) {
    registerObserver(target, key, currentObserver)
  }
  return result
}

/* if an observer function is running currently,
this function pairs the observer function  
with the currently fetched observable property  
and saves them into the observers Map */  
function registerObserver (target, key, observer) {  
  let observersForKey = observers.get(target).get(key)
  if (!observersForKey) {
    observersForKey = new Set()
    observers.get(target).set(key, observersForKey)
  }
  observersForKey.add(observer)
}

/* this trap intercepts set operations,
it queues every observer associated with the  
currently set property to be executed later */  
function set (target, key, value, receiver) {  
  const observersForKey = observers.get(target).get(key)
  if (observersForKey) {
    observersForKey.forEach(queueObserver)
  }
  return Reflect.set(target, key, value, receiver)
}

The get trap does nothing if currentObserver is not set. Otherwise, it pairs the fetched observable property and the currently running observer and saves them into the observers WeakMap. Observers are saved into a Set per observable property. This ensures that there are no duplicates.

The set trap retrieves all the observers paired with the modified observable property and queues them for later execution.

You can find a figure and a step-by-step description explaining the nx-observe example code below.

(Figure: JavaScript data binding with ES6 Proxy - observable code sample)

  1. The person observable object is created.
  2. currentObserver is set to print.
  3. print starts executing.
  4. person.name is retrieved inside print.
  5. The proxy get trap on person is invoked.
  6. The observer Set belonging to the (person, name) pair is retrieved by observers.get(person).get('name').
  7. currentObserver (print) is added to the observer Set.
  8. Steps 4-7 are executed again with person.age.
  9. ${person.name}, ${person.age} is printed to the console.
  10. print finishes executing.
  11. currentObserver is set to undefined.
  12. Some other code starts running.
  13. person.age is set to a new value (22).
  14. The proxy set trap on person is invoked.
  15. The observer Set belonging to the (person, age) pair is retrieved by observers.get(person).get('age').
  16. Observers in the observer Set (including print) are queued for execution.
  17. print executes again.

Running the observers

Queued observers run asynchronously in one batch, which results in superior performance. During registration, the observers are synchronously added to the queuedObservers Set. A Set cannot contain duplicates, so enqueuing the same observer multiple times won't result in multiple executions. If the Set was empty before, a new task is scheduled to iterate and execute all the queued observers after some time.

/* contains the triggered observer functions,
which should run soon */  
const queuedObservers = new Set()

/* points to the currently running observer,
it can be undefined */  
let currentObserver

/* the exposed observe function */
function observe (fn) {  
  queueObserver(fn)
}

/* adds the observer to the queue and 
ensures that the queue will be executed soon */  
function queueObserver (observer) {  
  if (queuedObservers.size === 0) {
    Promise.resolve().then(runObservers)
  }
  queuedObservers.add(observer)
}

/* runs the queued observers,
currentObserver is set to undefined in the end */  
function runObservers () {  
  try {
    queuedObservers.forEach(runObserver)
  } finally {
    currentObserver = undefined
    queuedObservers.clear()
  }
}

/* sets the global currentObserver to observer, 
then executes it */  
function runObserver (observer) {  
  currentObserver = observer
  observer()
}

The code above ensures that whenever an observer is executing, the global currentObserver variable points to it. Setting currentObserver 'switches' the get traps on, to listen and pair currentObserver with all the observable properties it uses while executing.

Building a dynamic observable tree

So far our model works nicely with single level data structures but requires us to wrap every new object-valued property in an observable by hand. For example, the code below would not work as expected.

const person = observable({data: {name: 'John'}})

function print () {  
  console.log(person.data.name)
}

// outputs 'John' to the console
observe(print)

// does nothing
setTimeout(() => person.data.name = 'Dave', 100)  

In order to make this code work, we would have to replace observable({data: {name: 'John'}}) with observable({data: observable({name: 'John'})}). Fortunately we can eliminate this inconvenience by modifying the get trap a little bit.

function get (target, key, receiver) {  
  const result = Reflect.get(target, key, receiver)
  if (currentObserver) {
    registerObserver(target, key, currentObserver)
    if (typeof result === 'object') {
      const observableResult = observable(result)
      Reflect.set(target, key, observableResult, receiver)
      return observableResult
    }
  }
  return result
}

The get trap above wraps the returned value into an observable proxy before returning it - in case it is an object. This is perfect from a performance point of view too, since observables are only created when they are really needed by an observer.

Comparison with an ES5 technique

A very similar data binding technique can be implemented with ES5 property accessors (getter/setter) instead of ES6 Proxies. Many popular libraries use this technique, for example MobX and Vue. Using proxies over accessors has two main advantages and a major disadvantage.

Expando properties

Expando properties are dynamically added properties in JavaScript. The ES5 technique does not support expando properties since accessors have to be predefined per property to be able to intercept operations. This is a technical reason why central stores with a predefined set of keys are trending nowadays.

On the other hand, the Proxy technique does support expando properties, since proxies are defined per object and they intercept operations for every property of the object.

A typical example where expando properties are crucial is working with arrays. JavaScript arrays are pretty much useless without the ability to add or remove items from them. ES5 data binding techniques usually hack around this problem by providing custom or overwritten Array methods. The sketch below shows how the Proxy-based version handles this naturally.
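This is a minimal sketch, assuming the observable() and observe() functions defined earlier in this article.

const list = observable(['a', 'b', 'c'])

// outputs 'a, b, c' when the first batch of observers runs
observe(() => console.log(list.join(', ')))

// outputs 'a, b, c, d' - the proxy also intercepts the length change
// caused by push(), even though no accessor was predefined for it
setTimeout(() => list.push('d'))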

Getters and setters

Libraries using the ES5 method provide 'computed' bound properties through some special syntax. These properties have their native equivalents, namely getters and setters. However, the ES5 method uses getters/setters internally to set up the data binding logic, so it cannot work with user-defined property accessors.

Proxies intercept every kind of property access and mutation, including getters and setters, so this does not pose a problem for the ES6 method.
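Here is a minimal sketch of a computed property, again assuming the observable() and observe() functions from above; since the getter reads its dependencies through the proxy, the observer reruns when they change.

const person = observable({
  firstName: 'John',
  lastName: 'Doe',
  // a plain getter acts as a 'computed' property
  get name () {
    return `${this.firstName} ${this.lastName}`
  }
})

// outputs 'John Doe' to the console
observe(() => console.log(person.name))

// outputs 'Jane Doe' - the getter reads firstName through the proxy,
// so the observer was registered for it
setTimeout(() => person.firstName = 'Jane')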

The disadvantage

The big disadvantage of using Proxies is browser support. They are only supported in the most recent browsers and the best parts of the Proxy API are non polyfillable.

A few notes

The data binding method introduced here is a working one, but I made some simplifications to make it digestible. You can find a few notes below about the topics I left out because of this simplification.

Cleaning up

Memory leaks are nasty. The code introduced here avoids them in a sense, as it uses a WeakMap to save the observers. This means that the observers associated with an observable are garbage collected together with the observable.

However, a possible use case could be a central, durable store with a frequently shifting DOM around it. In this case, DOM nodes should release all of their registered observers before they are garbage collected. This functionality is left out of the example, but you can check how the unobserve() function is implemented in the nx-observe code.
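As a rough illustration only (this is a hypothetical sketch, not the actual nx-observe implementation), unobserving could be supported by also tracking, per observer, the Sets it was registered in.

// maps an observer function to the Sets it was added to
const registrations = new WeakMap()

// a modified registerObserver that remembers each registration
function registerObserver (target, key, observer) {
  let observersForKey = observers.get(target).get(key)
  if (!observersForKey) {
    observersForKey = new Set()
    observers.get(target).set(key, observersForKey)
  }
  observersForKey.add(observer)

  let sets = registrations.get(observer)
  if (!sets) {
    sets = new Set()
    registrations.set(observer, sets)
  }
  sets.add(observersForKey)
}

// removes the observer from every Set it was registered in
function unobserve (observer) {
  const sets = registrations.get(observer)
  if (sets) {
    sets.forEach(set => set.delete(observer))
    registrations.delete(observer)
  }
}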

Double wrapping with Proxies

Proxies are transparent, meaning there is no native way of determining if something is a Proxy or a plain object. Moreover, they can be nested infinitely, so without the necessary precautions we might end up wrapping an observable again and again.

There are many clever ways to make a Proxy distinguishable from normal objects, but I left it out of the example. One way would be to add every Proxy to a WeakSet named proxies and check for inclusion later, as sketched below. If you are interested in how nx-observe implements the isObservable() method, please check the code.
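A minimal sketch of the WeakSet approach, reusing the observers WeakMap and the get/set traps from earlier; this is an illustration, not the exact nx-observe code.

// stores every proxy created by observable()
const proxies = new WeakSet()

function observable (obj) {
  // don't wrap something that is already an observable proxy
  if (proxies.has(obj)) {
    return obj
  }
  observers.set(obj, new Map())
  const proxy = new Proxy(obj, {get, set})
  proxies.add(proxy)
  return proxy
}

function isObservable (obj) {
  return proxies.has(obj)
}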

Inheritance

nx-observe also works with prototypal inheritance. The example below demonstrates what this means exactly.

const parent = observable({greeting: 'Hello'})  
const child = observable({subject: 'World!'})  
Object.setPrototypeOf(child, parent)

function print () {  
  console.log(`${child.greeting} ${child.subject}`)
}

// outputs 'Hello World!' to the console
observe(print)

// outputs 'Hello There!' to the console
setTimeout(() => child.subject = 'There!')

// outputs 'Hey There!' to the console
setTimeout(() => parent.greeting = 'Hey', 100)

// outputs 'Look There!' to the console
setTimeout(() => child.greeting = 'Look', 200)  

The get operation is invoked for every member of the prototype chain until the property is found, so the observers are registered everywhere they could be needed.

There are some edge cases caused by the little-known fact that set operations also walk the prototype chain (quite sneakily), but these won't be covered here.

Internal properties

Proxies also intercept 'internal property access'. Your code probably uses many internal properties that you usually don't even think about. Some keys for such properties are the well-known Symbols for example. Properties like these are usually correctly intercepted by Proxies, but there are a few buggy cases.
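For example, spreading an observable array touches the well-known Symbol.iterator key, along with length and the indexed properties; assuming the observable() and observe() functions above, all of these go through the same get trap as regular string keys.

const letters = observable(['a', 'b'])

observe(() => {
  // the spread operator accesses letters[Symbol.iterator], length
  // and the indices - all intercepted by the same get trap
  console.log([...letters])
})

// outputs ['x', 'b'] - the '0' property was registered during the iteration
setTimeout(() => letters[0] = 'x')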

Asynchronous nature

The observers could be run synchronously when the set operation is intercepted. This would provide several advantages like less complexity, predictable timing and nicer stack traces, but it would also cause a big mess for certain scenarios.

Imagine pushing 1000 items to an observable array in a single loop. The array length would change 1000 times, and the observers associated with it would also execute 1000 times in quick succession. This means running the exact same set of functions 1000 times, which is rarely a useful thing.

Another problematic scenario would be two-way observations. The below code would start an infinite cycle if observers ran synchronously.

const observable1 = observable({prop: 'value1'})  
const observable2 = observable({prop: 'value2'})

observe(() => observable1.prop = observable2.prop)  
observe(() => observable2.prop = observable1.prop)  

For these reasons nx-observe queues observers without duplicates and executes them in one batch as a microtask to avoid FOUC. If you are unfamiliar with the concept of a microtask, please check my previous article about timing in the browser.
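The snippet below illustrates this batching, again assuming the observable() and observe() functions from earlier: both mutations happen in the same task, so the queued observer reruns only once, in a microtask after that task finishes.

const person = observable({name: 'John', age: 20})

// outputs 'John, 20' when the first batch of observers runs
observe(() => console.log(`${person.name}, ${person.age}`))

setTimeout(() => {
  // both mutations queue the same observer within the same task,
  // so it reruns only once and outputs 'Dave, 22'
  person.name = 'Dave'
  person.age = 22
})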

Data binding with ES6 Proxies - the Conclusion

If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this Github repository and the nx-observe source code in this Github repository.

I hope you found this a good read. See you next time, when we'll discuss custom HTML Elements!

If you have any thoughts on the topic, please share them in the comments.


The Evolution of Asynchronous JavaScript

Async functions are just around the corner - but the journey here was quite long. Not too long ago we were just writing callbacks, then the Promise/A+ specification emerged, followed by generator functions, and now the async functions.

Let's take a look back and see how asynchronous JavaScript evolved over the years.

Callbacks

It all started with the callbacks.

Asynchronous JavaScript

Asynchronous programming, as we know it in JavaScript, can only be achieved with functions being first-class citizens of the language: they can be passed around like any other variable to other functions. This is how callbacks were born: if you pass a function to another function (a.k.a. a higher order function) as a parameter, you can call it within that function when you are finished with your job. No return values, only calling another function with the values.

Something.save(function(err) {  
  if (err)  {
    //error handling
    return;
  }
  console.log('success');
});

These so-called error-first callbacks are at the heart of Node.js itself - the core modules use them, as do most of the modules found on npm.

The challenges with callbacks:

  • it is easy to build callback hells or spaghetti code with them if not used properly
  • error handling is easy to miss
  • you can't return values with the return statement, nor use the throw keyword

Mostly because of these points the JavaScript world started to look for solutions that can make asynchronous JavaScript development easier.

One of the answers was the async module. If you have worked a lot with callbacks, you know how complicated it can get to run things in parallel, sequentially, or even to map arrays using asynchronous functions. Then the async module was born, thanks to Caolan McMahon.

With async, you can easily do things like:

async.map([1, 2, 3], AsyncSquaringLibrary.square,  
  function(err, result){
  // result will be [1, 4, 9]
});

Still, it is not that easy to read nor to write - and this is where Promises come in.



Promises

The current JavaScript Promise specification dates back to 2012 and has been available since ES6 - however, Promises were not invented by the JavaScript community. The term comes from Daniel P. Friedman, from 1976.

A promise represents the eventual result of an asynchronous operation.

The previous example with Promises may look like this:

Something.save()  
  .then(function() {
    console.log('success');
  })
  .catch(function() {
    //error handling
  })

You can notice that, of course, Promises utilize callbacks as well. Both then and catch register callbacks that will be invoked with either the result of the asynchronous operation or with the reason why it could not be fulfilled. Another great thing about Promises is that they can be chained:

saveSomething()  
  .then(updateOtherthing)
  .then(deleteStuff)  
  .then(logResults);

When using Promises, you may have to use polyfills in runtimes that don't have them yet. A popular choice in these cases is to use bluebird. These libraries may provide a lot more functionality than the native one - but even in these cases, limit yourself to the features provided by the Promises/A+ specification.

For more information on Promises, refer to the Promises/A+ specification.

You may ask: how can I use Promises when most of the libraries out there expose callback interfaces only?

Well, it is pretty easy - the only thing that you have to do is wrap the original, callback-based function call in a Promise, like this:

function saveToTheDb(value) {  
  return new Promise(function(resolve, reject) {
    db.values.insert(value, function(err, user) { // remember error first ;)
      if (err) {
        return reject(err); // don't forget to return here
      }
      resolve(user);
    });
  });
}

Some libraries/frameworks out there already support both, providing a callback and a Promise interface at the same time. If you build a library today, it is a good practice to support both. You can easily do so with something like this:

function foo(cb) {  
  if (cb) {
    return cb();
  }
  return new Promise(function (resolve, reject) {

  });
}

Or even simpler, you can choose to start with a Promise-only interface and provide backward compatibility with tools like callbackify. Callbackify basically does the same thing as the previous code snippet, but in a more general way.
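Just as an illustration of the pattern (not necessarily how the callbackify package itself is implemented), such a helper could look like this:

function callbackify (fn) {  
  return function () {
    var args = Array.prototype.slice.call(arguments);
    // if the last argument is a function, treat it as a node-style callback
    var cb = typeof args[args.length - 1] === 'function' ? args.pop() : null;
    var promise = fn.apply(this, args);
    if (!cb) {
      return promise;
    }
    promise.then(function (result) {
      cb(null, result);
    }, function (err) {
      cb(err);
    });
  };
}

// usage: expose both interfaces from the Promise-only saveToTheDb above
var saveToTheDbWithCallback = callbackify(saveToTheDb);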

Generators / yield

JavaScript Generators are a relatively new concept; they were introduced in ES6 (also known as ES2015).

Wouldn't it be nice if, when you execute your function, you could pause it at any point, calculate something else, do other things, and then return to it, even with some value, and continue?

This is exactly what generator functions do for you. When we call a generator function, it doesn't start running; we have to iterate through it manually.

function* foo () {  
  var index = 0;
  while (index < 2) {
    yield index++;
  }
}
var bar =  foo();

console.log(bar.next());    // { value: 0, done: false }  
console.log(bar.next());    // { value: 1, done: false }  
console.log(bar.next());    // { value: undefined, done: true }  

If you want to use generators easily for writing asynchronous JavaScript, you will need co as well.

Co is a generator based control flow goodness for Node.js and the browser, using promises, letting you write non-blocking code in a nice-ish way.

With co, our previous examples may look something like this:

co(function* (){  
  yield Something.save();
}).then(function() {
  // success
})
.catch(function(err) {
  //error handling
});

You may ask: what about operations running in parallel? The answer is simpler than you might think (under the hood it is just a Promise.all):

yield [Something.save(), Otherthing.save()];  

Async / await

Async functions were introduced in ES7 - and are currently only available using a transpiler like Babel. (Disclaimer: now we are talking about the async keyword, not the async package.)

In short, with the async keyword we can do what we did with the combination of co and generators - minus the hacking.

Under the hood, async functions use Promises - this is why an async function returns a Promise.
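For example, the value returned from an async function can be consumed with .then, just like any other Promise:

async function getAnswer() {  
  return 42;
}

// the return value is wrapped in a Promise
getAnswer().then(function (value) {
  console.log(value); // 42
});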

So if we want to do the same thing as in the previous examples, we may have to rewrite our snippet to the following:

async function save(Something) {  
  try {
    await Something.save()
    console.log('success');
  } catch (ex) {
    //error handling
  }
} 

As you can see, to use an async function you have to put the async keyword before the function declaration. After that, you can use the await keyword inside your newly created async function.

Running things in parallel with async functions is pretty similar to the yield approach - except now the Promise.all is not hidden, but you have to call it:

async function save(Something) {  
  await Promise.all([Something.save(), Otherthing.save()])
} 

Koa already supports async functions, so you can try them out today using babel.

import koa from 'koa';  
let app = koa();

app.experimental = true;

app.use(async function (){  
  this.body = await Promise.resolve('Hello Reader!')
})

app.listen(3000);  

Further reading

Currently, we are using Hapi with generators in production in most of our new projects - alongside Koa as well.

Which one do you prefer? Why? I would love to hear your comments!

The React.js Way: Getting Started Tutorial

Update: the second part is out! Learn more about the React.js way in the second part of the series: Flux Architecture with Immutable.js.

Now that the popularity of React.js is growing blazing fast and lots of interesting stuff is coming, my friends and colleagues started asking me more about how they can get started with React and how they should think the React way.

(Figure: Google search trends for React in the programming category. Initial public release: v0.3.0, May 29, 2013)

However, React is not a framework; there are concepts, libraries and principles that turn it into a fast, compact and beautiful way to program your app on the client and server side as well.

In this two-part React.js tutorial series, I am going to explain these concepts and give recommendations on what to use and how. We will cover ideas and technologies like:

  • ES6 React
  • virtual DOM
  • Component-driven development
  • Immutability
  • Top-down rendering
  • Rendering path and optimization
  • Common tools/libs for bundling, ES6, request making, debugging, routing, etc.
  • Isomorphic React

And yes, we will write code. I would like to make it as practical as possible.
All the snippets and post-related code are available in the RisingStack GitHub repository.

This article is the first of the two. Let's jump in!

Repository:
https://github.com/risingstack/react-way-getting-started

1. Getting Started with the React.js Tutorial

If you are already familiar with React and you understand the basics, like the concept of virtual DOM and thinking in components, then this React.js tutorial is probably not for you. We will discuss intermediate topics in the upcoming parts of this series. It will be fun, and I recommend you check back later.

Is React a framework?

In a nutshell: no, it's not.
Then what the hell is it, and why is everybody so keen on using it?

React is the "View" in the application, and a fast one. It also provides different ways to organize your templates and gets you thinking in components. In a React application, you should break down your site, page or feature into smaller pieces of components. It means that your site will be built from a combination of different components. These components are also built on top of other components, and so on. When a problem becomes challenging, you can break it down into smaller ones and solve it there. You can also reuse components somewhere else later. Think of them like Lego bricks. We will discuss component-driven development more deeply later in this article.

React also has this virtual DOM thing, which makes the rendering super fast while keeping it easily understandable and controllable at the same time. You can combine this with the idea of components and have the power of top-down rendering. We will cover this topic in the second article.

Ok I admit, I still didn't answer the question. We have components and fast rendering - but why is it a game changer? Because React is mainly a concept, and a library only second.
There are already several libraries following these ideas - doing it faster or slower - but slightly differently. Like every programming concept, React has its own solutions, tools and libraries, turning it into an ecosystem. In this ecosystem, you have to pick your own tools and build your own ~framework. I know it sounds scary, but believe me, you already know most of these tools; we will just connect them to each other, and later you will be very surprised how easy it is. For example, for dependencies we won't use any magic, just Node's require and npm. For pub-sub, we will use Node's EventEmitter, and so on.

(Facebook announced Relay, their framework for React, at the React.js Conf in January 2015, but it's not available yet. The date of the first public release is unknown.)

Are you excited already? Let's dig in!

The Virtual DOM concept in a nutshell

To track down model changes and apply them to the DOM (a.k.a. rendering), we have to be aware of two important things:

  1. when data has changed,
  2. which DOM element(s) should be updated.

For change detection (1), React uses an observer model instead of dirty checking (continuously checking the model for changes). That's why it doesn't have to calculate what has changed - it knows immediately. This reduces the calculations and makes the app smoother. But the really cool idea here is how it manages the DOM manipulations:

For the DOM changing challenge (2), React builds a tree representation of the DOM in memory and calculates which DOM elements should change. DOM manipulation is heavy, and we would like to keep it to a minimum. Luckily, React tries to keep as many DOM elements untouched as possible. Since the necessary changes can be calculated faster on the object representation, the cost of the actual DOM manipulation is reduced nicely.

Since React's diffing algorithm uses the tree representation of the DOM and re-calculates whole subtrees when their parent gets modified (marked as dirty), you should be aware of your model changes, because the whole subtree will be re-rendered then.
Don't be sad, later we will optimize this behavior together. (Spoiler: with shouldComponentUpdate() and ImmutableJS.)
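As a small preview of that optimization, a component can skip re-rendering a subtree by implementing shouldComponentUpdate; the hypothetical Item component below uses a simple reference check, which works well with immutable data.

class Item extends React.Component {  
  shouldComponentUpdate(nextProps) {
    // with immutable data a reference check is enough to detect changes
    return this.props.item !== nextProps.item;
  }
  render() {
    return <div>{this.props.item.name}</div>;
  }
}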

(Figure: React re-render - source: React's diffing algorithm, Christopher Chedeau)

How to render on the server too?

Given that this kind of DOM representation uses a fake DOM, it's possible to render the HTML output on the server side as well (without JSDom, PhantomJS etc.). React is also smart enough to recognize that the markup is already there (from the server) and will only add the event handlers on the client side.

Interesting: React's rendered HTML markup contains data-reactid attributes, which help React track DOM nodes.
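A minimal sketch of this with the React 0.13-era API might look like the following; the App component and its file path are placeholders for your own root component.

var React = require('react');  
var App = require('./app');

// on the server: produce the markup, including the data-reactid attributes
var html = React.renderToString(React.createElement(App, { name: 'John' }));

// on the client: render into the same container - React notices the
// server-generated markup and only attaches the event handlers
// React.render(React.createElement(App, { name: 'John' }), mountNode);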

Useful links, other virtual DOM libraries

Component-driven development

It was one of the most difficult parts for me to pick up when I was learning React.
In component-driven development, you won't see the whole site in one template.
In the beginning, you will probably think that it sucks. But I'm pretty sure that later you will recognize the power of thinking in smaller pieces and working with less responsibility. It makes things easier to understand, to maintain and to cover with tests.

How should I imagine it?

Check out the picture below. This is a possible component breakdown of a feature/site. Each of the bordered areas with different colors represents a single type of component. Accordingly, you have the following component hierarchy:

  • FilterableProductTable

What should a component contain?

First of all, it's wise to follow the single responsibility principle and, ideally, design your components to be responsible for only one thing. When you start to feel that your component is doing too much, you should consider breaking it down into smaller ones.

Since we are talking about component hierarchy, your components will use other components as well. But let's see the code of a simple component in ES5:

var HelloComponent = React.createClass({  
    render: function() {
        return <div>Hello {this.props.name}</div>;
    }
});

But from now on, we will use ES6. ;)
Let’s check out the same component in ES6:

class HelloComponent extends React.Component {  
  render() {
    return <div>Hello {this.props.name}</div>;
  }
}

JS, JSX

As you can see, our component is a mix of JS and HTML code. Wait, what? HTML in my JavaScript? Yes, you probably think it's strange, but the idea here is to have everything in one place. Remember, single responsibility. It makes a component extremely flexible and reusable.

In React, it's possible to write your component in pure JS like:

  render () {
    return React.createElement("div", null, "Hello ",
        this.props.name);
  }

But I think it's not very comfortable to write your HTML this way. Luckily, we can write it in JSX syntax (a JavaScript extension) which lets us write HTML inline:

  render () {
    return <div>Hello {this.props.name}</div>;
  }

What is JSX?
JSX is an XML-like syntax extension to ECMAScript. JSX and HTML syntax are similar, but they differ in some respects. For example, the HTML class attribute is called className in JSX. For more differences and deeper knowledge, check out Facebook's HTML Tags vs. React Components guide.

Because JSX is not supported in browsers by default (maybe someday), we have to compile it to JS. I'll write about how to use JSX in the Setup section later. (By the way, Babel can also transpile JSX to JS.)

Useful links about JSX:
- JSX in depth
- Online JSX compiler
- Babel: How to use the react transformer.

What else can we add?

Each component can have an internal state, some logic, event handlers (for example: button clicks, form input changes), and it can even have inline styles. Basically, everything that is needed for proper display.

You can see a {this.props.name} in the code snippet above. It means we can pass properties to our components when we are building our component hierarchy, like this: <MyComponent name="John Doe" />
It makes the component reusable and makes it possible to pass our application state down from the root component to the child components through the whole application, always passing just the necessary part of the data.

Check this simple React app snippet below:

class UserName extends React.Component {  
  render() {
    return <div>name: {this.props.name}</div>;
  }
}

class User extends React.Component {  
  render() {
    return <div>
        <h1>City: {this.props.user.city}</h1>
        <UserName name={this.props.user.name} />
      </div>;
  }
}

var user = { name: 'John', city: 'San Francisco' };  
React.render(<User user={user} />, mountNode);
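To also show internal state and an event handler, here is a small hypothetical counter component; in this React version, ES6 class components initialize state in the constructor and bind handlers manually.

class Counter extends React.Component {  
  constructor(props) {
    super(props);
    // internal state lives on the component instance
    this.state = { count: 0 };
    this.handleClick = this.handleClick.bind(this);
  }
  handleClick() {
    this.setState({ count: this.state.count + 1 });
  }
  render() {
    return <button onClick={this.handleClick}>
        Clicked {this.state.count} times
      </button>;
  }
}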

Useful links for building components:
- Thinking in React

React loves ES6

ES6 is here and there is no better place for trying it out than your new shiny React project.

React wasn't born with ES6 syntax; the support came in January this year, with version v0.13.0.

However, the scope of this article is not to explain ES6 deeply; we will just use some features from it, like classes, arrow functions, consts and modules. For example, we will inherit our components from the React.Component class.

Given that ES6 is only partially supported by browsers, we will write our code in ES6 but transpile it to ES5 later, making it work in every modern browser even without ES6 support.
To achieve this, we will use the Babel transpiler. It has a nice, compact intro to the supported ES6 features; I recommend checking it out: Learn ES6

Useful links about ES6
- Babel: Learn ES6
- React ES6 announcement

Bundling with Webpack and Babel

I mentioned earlier that we will involve tools you are already familiar with and build our application from a combination of those. The first tool, which might be well known, is the Node.js module system and its package manager, npm. We will write our code in the "node style" and require everything we need. React is available as a single npm package.
This way our component will look like this:

// would be in ES5: var React = require('react/addons');
import React from 'react/addons';

class MyComponent extends React.Component { ... }

// would be in ES5: module.exports = MyComponent;
export default MyComponent;  

We are going to use other npm packages as well. Most npm packages make sense on the client side too; for example, we will use debug for debugging and superagent for composing requests.

Now we have a dependency system, thanks to Node (more accurately, ES6 modules), and we have a solution for almost everything from npm. What's next? We should pick our favorite libraries for our problems and bundle them up for the client as a single codebase. To achieve this, we need a solution for making them run in the browser.

This is the point where we should pick a bundler. Two of the most popular solutions today are the Browserify and Webpack projects. We are going to use Webpack, because my experience is that the React community prefers it. However, I'm pretty sure that you can do the same with Browserify as well.

How does it work?

Webpack bundles our code and the required packages into output file(s) for the browser. Since we are using JSX and ES6, which we would like to transpile to ES5, we also have to place the JSX and ES6-to-ES5 transpiler into this flow. Actually, Babel can do both for us. Let's just use that!

We can do that easily because Webpack is configuration-oriented.

What do we need for this? First we need to install the necessary modules (start with npm init if you don't have a package.json file yet).

Run the following commands in your terminal (Node.js or io.js and npm are necessary for this step):

npm install --save-dev webpack  
npm install --save-dev babel  
npm install --save-dev babel-loader  

After that, we create the webpack.config.js file for Webpack (it's in ES5, since we don't have the ES6 transpiler in the Webpack configuration file):

var path = require('path');

module.exports = {  
  entry: path.resolve(__dirname, '../src/client/scripts/client.js'),
  output: {
    path: path.resolve(__dirname, '../dist'),
    filename: 'bundle.js'
  },

  module: {
    loaders: [
      {
        test: /src\/.+\.js$/,
        exclude: /node_modules/,
        loader: 'babel'
      }
    ]
  }
};

If we did it right, our application starts at ./src/client/scripts/client.js and is bundled into ./dist/bundle.js when we run the webpack command.

After that, you can just include the bundle.js script into your index.html and it should work:
<script src="bundle.js"></script>

(Hint: you can serve your site with node-static. Install the module with npm install -g node-static and start it with static . to serve your folder's content at 127.0.0.1:8080.)

Project setup

Now we have installed and configured Webpack and Babel properly.
As in every project, we need a project structure.

Folder structure

I prefer to follow the project structure below:

config/  
    app.js
    webpack.js (js config over json -> flexible)
src/  
  app/ (the React app: runs on server and client too)
    components/
      __tests__ (Jest test folder)
      AppRoot.jsx
      Cart.jsx
      Item.jsx
    index.js (just to export app)
    app.js
  client/  (only browser: attach app to DOM)
    styles/
    scripts/
      client.js
    index.html
  server/
    index.js
    server.js
.gitignore
.jshintrc
package.json  
README.md  

The idea behind this structure is to separate the React app from the client and server code, since our React app can run on both the client and server side (an isomorphic app - we will dive deep into this in an upcoming blog post).

How to test my React app

When we are moving to a new technology, one of the most important questions should be testability. Without a good test coverage, you are playing with fire.

Ok, but which testing framework to use?
My experience is that testing a front-end solution always works best with the test framework made by the same creators. Accordingly, I started to test my React apps with Jest. Jest is a test framework by Facebook and has many great features that I won't cover in this article.

I think it's more important to talk about the way of testing a React app. Luckily, the single responsibility principle forces our components to do only one thing, so we should test only that thing. Pass the properties to our component, trigger the possible events and check the rendered output. Sounds easy, because it is.

For a more practical example, I recommend checking out the Jest React.js tutorial.

Test JSX and ES6 files

To test our ES6 syntax and JSX files, we have to transform them for Jest. Jest has a config variable where you can define a preprocessor (scriptPreprocessor) for that.
First we have to create the preprocessor and then pass its path to Jest. You can find a working example of a Babel Jest preprocessor in our repository.
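The general shape is something like the sketch below; this assumes the Babel 5-era babel-core API, and the exact code in the repository may differ.

// preprocessor.js
var babel = require('babel-core');

module.exports = {
  process: function (src, filename) {
    // only transform our own sources, leave node_modules untouched
    if (filename.indexOf('node_modules') === -1) {
      return babel.transform(src, { filename: filename }).code;
    }
    return src;
  }
};

With this in place, the scriptPreprocessor option in the Jest config points to this file.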

Jest also has an example for React ES6 testing.

(The Jest config goes into the package.json.)

Takeaway

In this article, we examined why React is fast and scalable, and how different its approach is. We went through how React handles rendering, what component-driven development is, and how you should set up and organize your project. These are the very basics.

In the upcoming "The React way" articles we are going to dig deeper.

I still believe that the best way to learn a new programming approach is to start developing and writing code.
That's why I would like to ask you to write something awesome, and also to spend some time checking out the official React website, especially the guides section. It's an excellent resource; the Facebook developers and the React community did an awesome job with it.

Next up

If you liked this article, subscribe to our newsletter for more. The remaining parts of The React way post series are coming soon. We will cover topics like:

  • immutability
  • top-down rendering
  • Flux
  • isomorphic way (common app on client and server)

Feel free to check out the repository:
https://github.com/RisingStack/react-way-getting-started

Update: the second part is out! Learn more about the React.js way in the second part of the series: Flux Architecture with Immutable.js.

Getting Started with Koa - part 2

In the last episode of Getting Started with Koa we mastered generators and got to a point, where we can write code in synchronous fashion that runs asynchronously. This is good, because synchronous code is simple, elegant, and more reliable, while async code can lead to screaming and crying (callback hell).

This episode will cover tools that take the pain out, so we have to write only the fun parts. It will give an introduction to the basic features and mechanics of Koa.

Previously

// First part
var thunkify = require('thunkify');  
var fs = require('fs');  
var read = thunkify(fs.readFile);

// Second part
function *bar () {  
  try {
    var x = yield read('input.txt');
  } catch (err) {
    console.log(err);
  }
  console.log(x);
}

// Third part
var gen = bar();  
gen.next().value(function (err, data) {  
  if (err) {
    gen.throw(err);
  }

  gen.next(data.toString());
});

This is the last example from the previous post. As you can see, we can divide it into three important parts. First, we have to create our thunkified functions that can be used in a generator. Then we have to write our generator functions using the thunkified functions. Last, there is the part where we actually call and iterate through the generators, handling errors and such. If you think about it, this last part doesn't have anything to do with the essence of our program; basically it just lets us run a generator. Luckily there is a module that does this for us. Meet co.

co

Co is a generator-based flow-control module for node. The code below does exactly the same as the previous example, but we got rid of the generator-calling code. The only thing we had to do is pass the generator function to co, call it, and it magically works. Well, not magically - it just handles all of the generator-calling code for us, so we don't have to worry about that.

var co = require('co');  
var thunkify = require('thunkify');  
var fs = require('fs');

var read = thunkify(fs.readFile);

co(function *bar () {  
  try {
    var x = yield read('input.txt');
  } catch (err) {
    console.log(err);
  }
  console.log(x);
})();

As we have already learned, you can put a yield before anything that evaluates to something. So it isn't just thunks that can be yielded. Because co wants to provide an easy control flow, it handles certain yielded types specially. The currently supported yieldables:

  • thunks (functions)
  • array (parallel execution)
  • objects (parallel execution)
  • generators (delegation)
  • generator functions (delegation)
  • promises.

We already discussed how thunks work, so let's move on to the other ones.

Parallel execution

var read = thunkify(fs.readFile);

co(function *() {  
  // 3 concurrent reads
  var reads = yield [read('input.txt'), read('input.txt'), read('input.txt')];
  console.log(reads);

  // 2 concurrent reads
  reads = yield { a: read('input.txt'), b: read('input.txt') };
  console.log(reads);
})();

If you yield an array or an object, its contents will be evaluated in parallel. Of course this makes sense when the members of your collection are thunks or generators. You can nest them; co will traverse the array or object to run all of your functions in parallel. Important: the yielded result will not be flattened, it will retain the same structure.

var read = thunkify(fs.readFile);

co(function *() {  
  var a = [read('input.txt'), read('input.txt')];
  var b = [read('input.txt'), read('input.txt')];

  // 4 concurrent reads
  var files = yield [a, b];

  console.log(files);
})();

You can also achieve parallelism by yielding after the call of a thunk.

var read = thunkify(fs.readFile);

co(function *() {  
  var a = read('input.txt');
  var b = read('input.txt');

  // 2 concurrent reads
  console.log([yield a, yield b]);

  // or

  // 2 concurrent reads
  console.log(yield [a, b]);
})();

Delegation

Of course, you can also yield generators. Notice that you don't need to use yield *.

var stat = thunkify(fs.stat);

function *size (file) {  
  var s = yield stat(file);

  return s.size;
}

co(function *() {  
  var f = yield size('input.txt');

  console.log(f);
})();

We went through almost every yielding possibility you will come across using co. Here is a last example (taken from co's github page) to sum it up.

var co = require('co');  
var fs = require('fs');

function size (file) {  
  return function (fn) {
    fs.stat(file, function(err, stat) {
      if (err) return fn(err);
      fn(null, stat.size);
    });
  }
}

function *foo () {  
  var a = yield size('un.txt');
  var b = yield size('deux.txt');
  var c = yield size('trois.txt');
  return [a, b, c];
}

function *bar () {  
  var a = yield size('quatre.txt');
  var b = yield size('cinq.txt');
  var c = yield size('six.txt');
  return [a, b, c];
}

co(function *() {  
  var results = yield [foo(), bar()];
  console.log(results);
})()

I think at this point you have mastered generators enough to have a pretty good idea of how async flow is done with these tools.
Now it's time to move on to the subject of this whole series, Koa itself!

Koa

What you need to know about Koa, the module itself, is not too much. You can even look at its source: it's just 4 files, averaging around 300 lines each. Koa follows the tradition that every program you write must do one thing and do it well. So you'll see that every good Koa module (as every node module should be) is short, does one thing, and builds heavily on top of other modules. You should keep this in mind and hack according to it. It will benefit everybody: you and others reading your code. With that in mind, let's move on to the key features of Koa.

Application

var koa = require('koa');  
var app = koa();  

Creating a Koa app is just calling the required module function. This provides you with an object, which can contain an array of generators (middlewares) that are executed in a stack-like manner upon a new request.

Cascading

An important term, when dealing with Koa, is middleware. So let's make it clear first.

Middleware in Koa are functions that handle requests. A server created with Koa can have a stack of middleware associated with it.

Cascading in Koa means that control flows through a series of middlewares. In web development this is very useful: you can make complex behaviour really simple with it. Koa implements this with generators very intuitively and cleanly. It yields downstream, then control flows back upstream. To add a generator to the flow, call the use function with a generator. Try to guess why the code below produces the output A, B, C, D, E on every incoming request!
This is a server, so the listen function does what you think: it will listen on the specified port (its arguments are the same as pure node's listen).

app.use(function *(next) {  
  console.log('A');
  yield next;
  console.log('E');
});

app.use(function *(next) {  
  console.log('B');
  yield next;
  console.log('D');
});

app.use(function *(next) {  
  console.log('C');
});

app.listen(3000);  

When a new request comes in, it starts to flow through the middlewares in the order you wrote them. So in the example, the request enters the first middleware, which outputs A, then hits a yield next. When a middleware hits a yield next, control goes to the next middleware and continues there where it left off. So we move to the next one, which prints B. Then another jump to the last one, C. There is no more middleware; we have gone all the way downstream, and now we start to step back to the previous one (just like a stack), D. Then the first one ends, E, and we have streamed back upstream successfully!

At this point, the koa module itself doesn't include any other complexity - so instead of copy/pasting the documentation from the well-written Koa site, just read the relevant parts there.

Let's see an example (also taken from the Koa site) that makes use of the HTTP features. The first middleware calculates the response time. See how easily you can reach the beginning and the end of a response, and how elegantly you can split these functionality-wise.

app.use(function *(next) {  
  var start = new Date;
  yield next;
  var ms = new Date - start;
  this.set('X-Response-Time', ms + 'ms');
});

app.use(function *(next) {  
  var start = new Date;
  yield next;
  var ms = new Date - start;
  console.log('%s %s - %s', this.method, this.url, ms);
});

app.use(function *() {  
  this.body = 'Hello World';
});

app.listen(3000);  

Wrapping up

Now that you're familiar with the core of Koa, you might say that your old web framework did all the other fancy things, and you want those now! But also remember that there were a ton of features you never used, or that some didn't work the way you wanted. That's the good thing about Koa and modern node frameworks: you add the required features to your app in the shape of small modules from npm, and it does exactly what you need, the way you need it.

This article is a guest post from Gellért Hegyi.

Getting Started with Koa, part 1 - Generators

This article is a guest post from Gellért Hegyi.

Koa is a small and simple web framework, brought to you by the team behind Express, which aims to create a modern way of developing for the web.

In this series you will understand Koa's mechanics, learn how to use it effectively in the right way to be able to write web applications with it. This first part covers some basics (generators, thunks).

Why Koa?

It has key features that allow you to write web applications easily and fast (without callbacks). It uses new language elements from ES6 to make control flow management in Node easier, among other things.

Koa itself is really small. This is because unlike nowadays popular web frameworks (e.g. Express), Koa follows the approach of being extremely modular, meaning every module does one thing well and nothing more. With that in mind, let's get started!

Hello Koa

var koa = require('koa');  
var app = koa();

app.use(function *() {  
  this.body = 'Hello World';
});

app.listen(3000);  

Before we get started: to run the examples and your own ES6 code with node, you need to use version 0.11.9 or higher with the --harmony flag.

As you can see from the example above, there is nothing really interesting going on in it, except that strange little * after the function keyword. Well, it makes that function a generator function.

Generators

Wouldn't it be nice if, when you execute your function, you could pause it at any point, calculate something else, do other things, and then return to it, even with some value, and continue?

This could be just another type of iterator (like loops). Well, that's exactly what a generator does, and the best thing is that it is implemented in ES6, so we are free to use it.

Let's make some generators! First, you have to create your generator function, which looks exactly like a regular function, except that you put an * symbol after the function keyword.

function *foo () { }  

Now we have a generator function. When we call this function it returns an iterator object. So unlike regular function calls, when we call a generator, the code in it doesn't start running, because as discussed earlier, we will iterate through it manually.

function *foo (arg) { } // generator function  
var bar = foo(123);      // iterator  object  

With this returned object, bar, we can iterate through the function. To start, and then to step to the next iteration of the generator, simply call the next() method of bar. When next() is called, the function starts or continues to run from where it left off and runs until it hits a pause.

Besides continuing, next() also returns an object, which gives information about the state of the generator. One property is value, which is the current iteration value - where we paused the generator. The other is a boolean, done, which indicates whether the generator has finished running.

function *foo (arg) { return arg }  
var bar = foo(123);  
bar.next();          // { value: 123, done: true }  

As we can see, there isn't any pause in the example above, so it immediately returns an object where done is true. If you specify a return value in the generator, it will be returned in the last iterator object (when done is true). Now we only need to be able to pause a generator. As said it's like iterating through a function and at every iteration it yields a value (where we paused). So we pause with the yield keyword.

yield

yield [[expression]]  

Calling next() starts the generator and it runs until it hits a yield. Then it returns the object with value and done, where value has the expression value. This expression can be anything.

function* foo () {  
  var index = 0;
  while (index < 2) {
    yield index++
  }
}
var bar =  foo();

console.log(bar.next());    // { value: 0, done: false }  
console.log(bar.next());    // { value: 1, done: false }  
console.log(bar.next());    // { value: undefined, done: true }  

When we call next() again, the yielded value will be returned inside the generator and it continues. It's also possible to pass a value back to the generator from the iterator object (next(val)); it will then be returned inside the generator when it continues.

function* foo () {  
  var val = yield 'A';
  console.log(val);           // 'B'
}
var bar =  foo();

console.log(bar.next());    // { value: 'A', done: false }  
console.log(bar.next('B')); // { value: undefined, done: true }  

Error handling

If you find something wrong with the iterator object's value, you can use its throw() method and catch the error in the generator. This enables really nice error handling in a generator.

function *foo () {  
  try {
    var x = yield 'B';   // the thrown error will surface here
  } catch (err) {
    throw err;
  }
}

var bar = foo();  
if (bar.next().value === 'B') {  
  bar.throw(new Error("it's B!"));
}

for...of

There is a loop type in ES6 that can be used for iterating over a generator: the for...of loop. The iteration continues as long as done is false. Keep in mind that if you use this loop, you cannot pass a value to a next() call, and the loop will throw away the returned value.

function *foo () {  
  yield 1;
  yield 2;
  yield 3;
}

for (var v of foo()) {  
  console.log(v);
}

yield *

As said, you can yield pretty much anything, even a generator, but then you have to use yield *. This is called delegation. You're delegating to another generator, so you can iterate through multiple nested generators, with one iterator object.

function *bar () {  
  yield 'b';
}

function *foo () {  
  yield 'a'; 
  yield *bar();
  yield 'c';
}

for (var v of foo()) {  
  console.log(v);
}

Thunks

Thunks are another concept that we have to wrap our heads around to fully understand Koa. Primarily, they are used to assist a call to another function. You can sort of associate them with lazy evaluation. What's important for us, though, is that they can be used to move node's callbacks out of the argument list and into an outer function call.

var read = function (file) {  
  return function (cb) {
    require('fs').readFile(file, cb);
  }
}

read('package.json')(function (err, str) { })  

There is a small module for this called thunkify, which transforms a regular node function into a thunk. You might question the use of that, but it turns out it can be pretty good for ditching callbacks in generators.
First we have to transform the node function we want to use in a generator into a thunk. Then we use this thunk in our generator as if it returned the value that we would otherwise access in the callback. When calling the starting next(), its value will be a function whose parameter is the callback of the thunkified function. In the callback, we can check for errors (and throw if needed), or call next() with the received data.

var thunkify = require('thunkify');  
var fs = require('fs');  
var read = thunkify(fs.readFile);

function *bar () {  
  try {
    var x = yield read('input.txt');
  } catch (err) {
    throw err;
  }
  console.log(x);
}
var gen = bar();  
gen.next().value(function (err, data) {  
  if (err) gen.throw(err);
  gen.next(data.toString());
})

Take your time to understand every part of this example, because it's really important for Koa to get this. If you focus on the generator part of the example, it's really cool. It has the simplicity of synchronous code, with good error handling, but it still happens asynchronously.

To be continued...

These last examples may look cumbersome, but in the next part we will discover tools that take these parts out of our code, so that we are left with just the good parts. We will also finally get to know Koa and its smooth mechanics, which make web development such a breeze.

Update: the second part is out: Getting Started with Koa - part 2