
The Evolution of Asynchronous JavaScript


Async functions are just around the corner - but the journey here was quite long. Not too long ago we wrote plain callbacks; then the Promises/A+ specification emerged, followed by generator functions, and now async functions.

Let's take a look back and see how asynchronous JavaScript evolved over the years.

Callbacks

It all started with the callbacks.

Asynchronous JavaScript

Asynchronous programming, as we know it in JavaScript, is only possible because functions are first-class citizens of the language: they can be passed around like any other variable. This is how callbacks were born: if you pass a function to another function (a higher-order function) as a parameter, that function can call it back when it has finished its job. There are no return values, only a call to another function with the results.

Something.save(function(err) {  
  if (err)  {
    //error handling
    return;
  }
  console.log('success');
});

These so-called error-first callbacks are at the heart of Node.js itself - the core modules use them, as do most of the modules found on npm.

The challenges with callbacks:

  • it is easy to build callback hells or spaghetti code with them if not used properly
  • error handling is easy to miss
  • you can't return values with the return statement, nor use the throw keyword
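The first point is easy to demonstrate: a few dependent steps quickly nest into the so-called callback hell. A minimal sketch, using hypothetical error-first functions (synchronous here only to keep it short - real ones would hit a database or the network):

```javascript
// Hypothetical error-first operations standing in for real async calls
function getUser(id, cb) { cb(null, { id: id, name: 'Ann' }); }
function getOrders(user, cb) { cb(null, [1, 2]); }
function getInvoice(order, cb) { cb(null, 'invoice-' + order); }

getUser(42, function (err, user) {
  if (err) { return console.error(err); }      // error handling...
  getOrders(user, function (err, orders) {
    if (err) { return console.error(err); }    // ...repeated...
    getInvoice(orders[0], function (err, invoice) {
      if (err) { return console.error(err); }  // ...at every level
      console.log(invoice);                    // 'invoice-1'
    });
  });
});
```

Every additional dependent step adds one more level of indentation and one more copy of the error check.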

Mostly because of these points, the JavaScript world started looking for solutions that make asynchronous JavaScript development easier.

One of the answers was the async module. If you have worked a lot with callbacks, you know how complicated it can get to run things in parallel or sequentially, or even to map arrays using asynchronous functions. Thanks to Caolan McMahon, the async module was born.

With async, you can easily do things like:

async.map([1, 2, 3], AsyncSquaringLibrary.square,
  function (err, result) {
    // result will be [1, 4, 9]
  });

Still, it is not that easy to read or to write - so enter Promises.





Promises

The current JavaScript Promise specification dates back to 2012 and has been available since ES6 - however, Promises were not invented by the JavaScript community. The term comes from Daniel P. Friedman, from 1976.

A promise represents the eventual result of an asynchronous operation.

The previous example with Promises may look like this:

Something.save()  
  .then(function() {
    console.log('success');
  })
  .catch(function() {
    //error handling
  })

You may notice that, of course, Promises utilize callbacks as well. Both then and catch register callbacks that will be invoked with either the result of the asynchronous operation or with the reason why it could not be fulfilled. Another great thing about Promises is that they can be chained:

saveSomething()  
  .then(updateOtherthing)
  .then(deleteStuff)  
  .then(logResults);
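Values travel down the chain as well: each then receives what the previous handler returned, or what its returned promise resolved to. A small sketch:

```javascript
Promise.resolve(1)
  .then(function (n) { return n + 1; })                   // plain value: 2
  .then(function (n) { return Promise.resolve(n * 10); }) // promise resolving to 20
  .then(function (n) { console.log(n); });                // logs 20
```

Returning a promise from a handler makes the chain wait for it before calling the next then.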

When using Promises, you may have to use polyfills in runtimes that don't support them yet. A popular choice in these cases is bluebird. These libraries may provide a lot more functionality than the native implementation - even then, it's wise to limit yourself to the features defined by the Promises/A+ specification.

For more information on Promises, refer to the Promises/A+ specification.

You may ask: how can I use Promises when most of the libraries out there expose callback interfaces only?

Well, it is pretty easy - the only thing you have to do is wrap the original function call in a Promise, like this:

function saveToTheDb(value) {
  return new Promise(function (resolve, reject) {
    db.values.insert(value, function (err, user) { // remember: error first ;)
      if (err) {
        return reject(err); // don't forget to return here
      }
      resolve(user);
    });
  });
}

Some libraries/frameworks out there already support both, providing a callback and a Promise interface at the same time. If you build a library today, it is a good practice to support both. You can easily do so with something like this:

function foo(cb) {
  if (cb) {
    // the caller passed a callback - use the callback interface
    return cb();
  }
  // no callback passed - return a Promise instead
  return new Promise(function (resolve, reject) {
    // do the work here, then call resolve or reject
  });
}

Or, even simpler, you can start with a Promise-only interface and provide backward compatibility with tools like callbackify. Callbackify does essentially what the previous snippet shows, but in a more general way.
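A simplified sketch of what such a helper does (not the actual callbackify implementation): if the last argument is a function, treat it as a node-style callback; otherwise just return the Promise.

```javascript
// Minimal callbackify sketch: wraps a Promise-returning function so
// callers may also pass a node-style, error-first callback.
function callbackify(fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    var cb = typeof args[args.length - 1] === 'function' ? args.pop() : null;
    var promise = fn.apply(this, args);
    if (!cb) {
      return promise; // Promise interface
    }
    // callback interface: translate resolution/rejection to error-first style
    promise.then(
      function (result) { cb(null, result); },
      function (err) { cb(err); }
    );
  };
}

// usage with a hypothetical Promise-only function
var save = callbackify(function (value) { return Promise.resolve(value); });

save('data', function (err, result) {
  console.log(result); // 'data'
});
save('data').then(function (result) {
  console.log(result); // 'data'
});
```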

Generators / yield

JavaScript generators are a relatively new concept - they were introduced in ES6 (also known as ES2015).

Wouldn't it be nice if, when you execute your function, you could pause it at any point, calculate something else, do other things, and then return to it, even with some value, and continue?

This is exactly what generator functions do for you. When we call a generator function, it doesn't start running - we have to iterate through it manually.

function* foo () {  
  var index = 0;
  while (index < 2) {
    yield index++;
  }
}
var bar = foo();

console.log(bar.next());    // { value: 0, done: false }  
console.log(bar.next());    // { value: 1, done: false }  
console.log(bar.next());    // { value: undefined, done: true }  

If you want to use generators easily for writing asynchronous JavaScript, you will need co as well.

Co is a generator based control flow goodness for Node.js and the browser, using promises, letting you write non-blocking code in a nice-ish way.

With co, our previous examples may look something like this:

co(function* (){  
  yield Something.save();
}).then(function() {
  // success
})
.catch(function(err) {
  //error handling
});

You may ask: what about operations running in parallel? The answer is simpler than you might think (under the hood, it is just a Promise.all):

yield [Something.save(), Otherthing.save()];  

Async / await

Async functions were proposed for ES7 and are currently only available using a transpiler like babel. (Disclaimer: now we are talking about the async keyword, not the async package.)

In short, with the async keyword we can do what we are doing with the combination of co and generators - except the hacking.


Under the hood, async functions use Promises - this is why an async function returns a Promise.

So if we want to do the same thing as in the previous examples, we may have to rewrite our snippet to the following:

async function save(Something) {
  try {
    await Something.save();
    console.log('success');
  } catch (ex) {
    //error handling
  }
}

As you can see to use an async function you have to put the async keyword before the function declaration. After that, you can use the await keyword inside your newly created async function.

Running things in parallel with async functions is pretty similar to the yield approach, except now the Promise.all is not hidden - you have to call it yourself:

async function save(Something) {
  await Promise.all([Something.save(), Otherthing.save()]);
}

Koa already supports async functions, so you can try them out today using babel.

import koa from 'koa';
let app = koa();

app.experimental = true;

app.use(async function () {
  this.body = await Promise.resolve('Hello Reader!');
});

app.listen(3000);  

Further reading

Currently we are using Hapi with generators in production in most of our new projects, alongside Koa as well.

Which one do you prefer? Why? I would love to hear your comments!

Start using GraphQL with Graffiti

Update: we've released a Mongoose adapter for Graffiti. Here's how to get started with it.

Currently, the consumption of HTTP REST APIs dominates the client-side world, and GraphQL aims to change that. The transition can be time-consuming - this is where Graffiti comes into the picture.

Graffiti grabs your existing models, transforms them into a GraphQL schema and exposes it over HTTP.


Why we made Graffiti for GraphQL

We don't want to rewrite our application - no one wants that. Graffiti provides an express middleware, a hapi plugin and a koa middleware to convert your existing models into a GraphQL schema and expose it over HTTP.

Use cases of Graffiti

There are a couple of areas where Graffiti is extremely useful:

For existing applications

If you are already running an HTTP REST API and use an ORM, then by adding a couple of lines of code you can expose a GraphQL endpoint.

For new applications

Graffiti comes to the rescue when you are just about to start developing a new backend for your endpoint consumers - the only thing you have to define is your models, using one of the supported ORMs.

Setup Graffiti

Adding Graffiti to your project is as easy as:

import express from 'express';  
import graffiti from '@risingstack/graffiti';  
import {getSchema} from '@risingstack/graffiti-mongoose';  
import mongooseSchema from './schema';

const app = express();  
app.use(graffiti.express({  
  schema: getSchema(mongooseSchema)
}));

For complete, working examples check out our Graffiti examples folder.

You can play with a running Relay application using Graffiti here. Navigate to /graphql to explore the schema with GraphiQL.

You can use a GraphQL schema generated by an adapter or your own GraphQLSchema instance with Graffiti.

GraphQL over Websocket / mqtt

With Graffiti you are not limited to HTTP - with the adapters you can easily expose the GraphQL interface over any transport protocol.

Roadmap

We have a fully functional Graffiti adapter for Mongoose and we have plans to support other ORMs too. Also, please note, that some of the following items depend only on the adapters, and not on the main project itself.

  • Query support (done)
  • Mutation support (done)
  • Yeoman generator (planned)
  • Relay support (done)
  • Adapters
    • for MongoDB: graffiti-mongoose (done)
    • for RethinkDB: graffiti-thinky (in progress)
    • for SQL: graffiti-bookshelf (in progress)

Contributing

If you are interested in contributing, just say hi in the main Graffiti repository.

Getting Started with Koa - part 2

In the last episode of Getting Started with Koa we mastered generators and got to a point, where we can write code in synchronous fashion that runs asynchronously. This is good, because synchronous code is simple, elegant, and more reliable, while async code can lead to screaming and crying (callback hell).

This episode will cover tools that take the pain out, so we have to write only the fun parts. It will give an introduction to the basic features and mechanics of Koa.

Previously

// First part
var thunkify = require('thunkify');  
var fs = require('fs');  
var read = thunkify(fs.readFile);

// Second part
function *bar () {  
  try {
    var x = yield read('input.txt');
  } catch (err) {
    console.log(err);
  }
  console.log(x);
}

// Third part
var gen = bar();  
gen.next().value(function (err, data) {  
  if (err) {
    gen.throw(err);
  }

  gen.next(data.toString());
});

This is the last example of the previous post. As you can see, we can divide it into three important parts: first, we create our thunkified functions that can be used in a generator; then we write our generator functions using those thunkified functions; last, there is the part where we actually call and iterate through the generators, handling errors and so on. If you think about it, this last part doesn't have anything to do with the essence of our program - it basically just lets us run a generator. Luckily, there is a module that does this for us. Meet co.

co

Co is a generator based flow-control module for node. The code below does exactly the same as the previous example, but we got rid of the generator-calling code. The only thing we had to do is pass the generator to co, call it, and it magically works. Well, not magically: it just handles all the generator-calling code for you, so we don't have to worry about it.

var co = require('co');  
var thunkify = require('thunkify');  
var fs = require('fs');

var read = thunkify(fs.readFile);

co(function *bar () {  
  try {
    var x = yield read('input.txt');
  } catch (err) {
    console.log(err);
  }
  console.log(x);
})();

As we have already learned, you can put a yield before anything that evaluates to something, so it isn't just thunks that can be yielded. Because co wants to provide an easy control flow, it supports a specific set of types. The currently supported yieldables:

  • thunks (functions)
  • array (parallel execution)
  • objects (parallel execution)
  • generators (delegation)
  • generator functions (delegation)
  • promises.
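To make the promise case concrete, here is a toy runner in the spirit of co that handles only promise yields (a simplified sketch, not co's actual implementation):

```javascript
// Toy co: drives a generator, resolving each yielded promise and
// feeding the result (or error) back into the generator.
function run(genFn) {
  var gen = genFn();
  return new Promise(function (resolve, reject) {
    function step(produce) {
      var r;
      try { r = produce(); } catch (e) { return reject(e); }
      if (r.done) { return resolve(r.value); }
      // resolve the yielded value, then continue the generator with it
      Promise.resolve(r.value).then(
        function (v) { step(function () { return gen.next(v); }); },
        function (e) { step(function () { return gen.throw(e); }); }
      );
    }
    step(function () { return gen.next(); });
  });
}

run(function *() {
  var a = yield Promise.resolve(1);
  var b = yield Promise.resolve(2);
  return a + b;
}).then(function (sum) {
  console.log(sum); // 3
});
```

Rejected promises are routed back in with gen.throw, which is why a plain try/catch works inside the generator.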

We already discussed how thunks work, so let's move on to the other ones.

Parallel execution

var read = thunkify(fs.readFile);

co(function *() {  
  // 3 concurrent reads
  var reads = yield [read('input.txt'), read('input.txt'), read('input.txt')];
  console.log(reads);

  // 2 concurrent reads
  reads = yield { a: read('input.txt'), b: read('input.txt') };
  console.log(reads);
})();

If you yield an array or an object, its contents will be evaluated in parallel. Of course, this makes sense when the members of your collection are thunks or generators. You can also nest them: co will traverse the array or object and run all of your functions in parallel. Important: the yielded result will not be flattened; it retains the original structure.

var read = thunkify(fs.readFile);

co(function *() {  
  var a = [read('input.txt'), read('input.txt')];
  var b = [read('input.txt'), read('input.txt')];

  // 4 concurrent reads
  var files = yield [a, b];

  console.log(files);
})();

You can also achieve parallelism by calling the thunks first and yielding them afterwards.

var read = thunkify(fs.readFile);

co(function *() {  
  var a = read('input.txt');
  var b = read('input.txt');

  // 2 concurrent reads
  console.log([yield a, yield b]);

  // or

  // 2 concurrent reads
  console.log(yield [a, b]);
})();

Delegation

Of course, you can also yield generators. Notice that you don't need to use yield * here.

var stat = thunkify(fs.stat);

function *size (file) {  
  var s = yield stat(file);

  return s.size;
}

co(function *() {  
  var f = yield size('input.txt');

  console.log(f);
})();

We have gone through almost every yielding possibility you will come across when using co. Here is a last example (taken from co's GitHub page) to sum it up.

var co = require('co');  
var fs = require('fs');

function size (file) {  
  return function (fn) {
    fs.stat(file, function(err, stat) {
      if (err) return fn(err);
      fn(null, stat.size);
    });
  }
}

function *foo () {  
  var a = yield size('un.txt');
  var b = yield size('deux.txt');
  var c = yield size('trois.txt');
  return [a, b, c];
}

function *bar () {  
  var a = yield size('quatre.txt');
  var b = yield size('cinq.txt');
  var c = yield size('six.txt');
  return [a, b, c];
}

co(function *() {  
  var results = yield [foo(), bar()];
  console.log(results);
})()

I think at this point you have mastered generators enough to have a pretty good idea of how an async flow is done with these tools.
Now it's time to move on to the subject of this whole series: Koa itself!

Koa

There is not too much you need to know about koa, the module itself. You can even look at its source: it's just 4 files, averaging around 300 lines each. Koa follows the tradition that every program you write should do one thing and do it well. So you'll see that every good koa module (as every node module should be) is short, does one thing, and builds heavily on top of other modules. You should keep this in mind and hack accordingly. It will benefit everybody: you and others reading your code. With that in mind, let's move on to the key features of Koa.

Application

var koa = require('koa');  
var app = koa();  

Creating a Koa app is just calling the required module's function. This provides you with an object that can contain an array of generators (middleware), executed in a stack-like manner upon each new request.

Cascading

An important term, when dealing with Koa, is middleware. So let's make it clear first.

Middleware in Koa are functions that handle requests. A server created with Koa can have a stack of middleware associated with it.

Cascading in Koa means that control flows through a series of middleware. In web development this is very useful: you can make complex behaviour really simple with it. Koa implements this with generators very intuitively and cleanly: it yields downstream, then control flows back upstream. To add a generator to the flow, call the use function with a generator. Try to guess why the code below prints A, B, C, D, E at every incoming request!
This is a server, so the listen function does what you'd expect: it listens on the specified port (its arguments are the same as plain node's listen).

app.use(function *(next) {  
  console.log('A');
  yield next;
  console.log('E');
});

app.use(function *(next) {  
  console.log('B');
  yield next;
  console.log('D');
});

app.use(function *(next) {  
  console.log('C');
});

app.listen(3000);  

When a new request comes in, it starts to flow through the middleware in the order you wrote them. So in the example, the request enters the first middleware, which outputs A, then hits a yield next. When a middleware hits a yield next, control moves to the next middleware, and the current one will continue from where it left off later. So we move to the next one, which prints B, then another jump to the last one: C. There is no more middleware; we have gone all the way downstream, so now we step back to the previous one (just like a stack): D. Then the first one ends with E, and we have streamed back upstream successfully!

At this point, the koa module itself doesn't include any other complexity - so instead of copy-pasting the documentation from the well-written Koa site, just read it there.

Let's see an example (also taken from the Koa site) that makes use of the HTTP features. The first middleware calculates the response time. See how easily you can reach both the beginning and the end of a response, and how elegantly you can split these up functionality-wise.

app.use(function *(next) {  
  var start = new Date;
  yield next;
  var ms = new Date - start;
  this.set('X-Response-Time', ms + 'ms');
});

app.use(function *(next) {  
  var start = new Date;
  yield next;
  var ms = new Date - start;
  console.log('%s %s - %s', this.method, this.url, ms);
});

app.use(function *() {  
  this.body = 'Hello World';
});

app.listen(3000);  

Wrapping up

Now that you're familiar with the core of Koa, you might say that your old web framework did all the other fancy things and you want those now! But also remember that there were a ton of features you never used, and that some didn't work the way you wanted. That's the good thing about Koa and modern node frameworks: you add the required features to your app as small modules from npm, and it does exactly what you need, the way you need it.

This article is a guest post from Gellért Hegyi.

Getting Started with Koa, part 1 - Generators

This article is a guest post from Gellért Hegyi.

Koa is a small and simple web framework, brought to you by the team behind Express, which aims to create a modern way of developing for the web.

In this series you will come to understand Koa's mechanics and learn how to use it effectively, the right way, so you can write web applications with it. This first part covers some basics (generators, thunks).

Why Koa?

It has key features that allow you to write web applications easily and fast (without callbacks). Among other things, it uses new language elements from ES6 to make control flow management in Node easier.

Koa itself is really small. This is because, unlike today's popular web frameworks (e.g. Express), Koa follows the approach of being extremely modular: every module does one thing well and nothing more. With that in mind, let's get started!

Hello Koa

var koa = require('koa');  
var app = koa();

app.use(function *() {  
  this.body = 'Hello World';
});

app.listen(3000);  

Before we get started: to run the examples and your own ES6 code with node, you need version 0.11.9 or higher, with the --harmony flag.

As you can see from the example above, there is nothing really interesting going on in it, except that strange little * after the function keyword. Well, it makes that function a generator function.

Generators

Wouldn't it be nice if, when you execute your function, you could pause it at any point, calculate something else, do other things, then return to it, even with some value, and continue?

This could be just another type of iterator (like loops). Well, that's exactly what a generator does, and the best thing is that it is implemented in ES6, so we are free to use it.

Let's make some generators! First, you have to create your generator function, which looks exactly like a regular function, except that you put an * symbol after the function keyword.

function *foo () { }  

Now we have a generator function. When we call this function, it returns an iterator object. So unlike regular function calls, when we call a generator, the code in it doesn't start running - as discussed earlier, we will iterate through it manually.

function *foo (arg) { }  // generator function
var bar = foo(123);      // iterator object

With this returned object, bar, we can iterate through the function. To start, and then to move to the next step of the generator, simply call the next() method of bar. When next() is called, the function starts or continues running from where it left off, until it hits a pause.

But besides continuing, it also returns an object which gives information about the state of the generator. One property is value, which is the current iteration's value: the point where we paused the generator. The other is the boolean done, which indicates whether the generator has finished running.

function *foo (arg) { return arg }  
var bar = foo(123);  
bar.next();          // { value: 123, done: true }  

As we can see, there isn't any pause in the example above, so it immediately returns an object where done is true. If you specify a return value in the generator, it will be returned in the last iterator object (when done becomes true). Now we only need a way to pause a generator. As said, it's like iterating through a function, and at every iteration it yields a value (the point where we paused). So we pause with the yield keyword.

yield

yield [[expression]]  

Calling next() starts the generator, and it runs until it hits a yield. Then it returns the object with value and done, where value holds the expression's value. This expression can be anything.

function* foo () {  
  var index = 0;
  while (index < 2) {
    yield index++
  }
}
var bar = foo();

console.log(bar.next());    // { value: 0, done: false }  
console.log(bar.next());    // { value: 1, done: false }  
console.log(bar.next());    // { value: undefined, done: true }  

When we call next() again, the yielded value is returned in the generator and it continues running. It's also possible to pass a value from the iterator object back to the generator (next(val)): it will be returned inside the generator when it continues.

function* foo () {  
  var val = yield 'A';
  console.log(val);           // 'B'
}
var bar = foo();

console.log(bar.next());    // { value: 'A', done: false }  
console.log(bar.next('B')); // { value: undefined, done: true }  

Error handling

If you find something wrong with the iterator object's value, you can use its throw() method and catch the error in the generator. This enables really nice error handling in generators.

function *foo () {
  try {
    var x = yield 'B';   // the error will be thrown back in here
  } catch (err) {
    throw err;
  }
}

var bar = foo();
if (bar.next().value == 'B') {
  bar.throw(new Error("it's B!"));
}

for...of

There is a loop type in ES6 that can be used for iterating over a generator: the for...of loop. The iteration continues until done becomes true. Keep in mind that if you use this loop, you cannot pass a value to a next() call, and the loop throws away the generator's return value.

function *foo () {  
  yield 1;
  yield 2;
  yield 3;
}

for (var v of foo()) {
  console.log(v);
}

yield *

As said, you can yield pretty much anything, even a generator - but then you have to use yield *. This is called delegation: you're delegating to another generator, so you can iterate through multiple nested generators with one iterator object.

function *bar () {  
  yield 'b';
}

function *foo () {  
  yield 'a'; 
  yield *bar();
  yield 'c';
}

for (var v of foo()) {
  console.log(v);
}

Thunks

Thunks are another concept that we have to wrap our heads around to fully understand Koa. Primarily, they are used to assist a call to another function; you can sort of associate them with lazy evaluation. What's important for us, though, is that they can be used to move node's callbacks out of the argument list and into a separate function call.

var read = function (file) {  
  return function (cb) {
    require('fs').readFile(file, cb);
  }
}

read('package.json')(function (err, str) { })  

There is a small module for this called thunkify, which transforms a regular node function into a thunk. You might question the use of that, but it turns out it can be pretty good for ditching callbacks in generators.
First we have to transform the node function we want to use in a generator into a thunk. Then we use this thunk in our generator as if it returned the value that we would otherwise access in the callback. When calling the starting next(), its value will be a function whose parameter is the callback of the thunkified function. In the callback, we can check for errors (and throw if needed), or call next() with the received data.

var thunkify = require('thunkify');  
var fs = require('fs');  
var read = thunkify(fs.readFile);

function *bar () {  
  try {
    var x = yield read('input.txt');
  } catch (err) {
    throw err;
  }
  console.log(x);
}
var gen = bar();  
gen.next().value(function (err, data) {  
  if (err) gen.throw(err);
  gen.next(data.toString());
})

Take your time to understand every part of this example, because getting this is really important for koa. If you focus on the generator part of the example, it's really cool: it has the simplicity of synchronous code, with good error handling, and still it runs asynchronously.
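For completeness, a minimal thunkify could look like this (a simplified sketch; the real module also guards against the callback being called more than once):

```javascript
// Minimal thunkify sketch: capture the arguments now, take the
// callback later in a separate call.
function thunkify(fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    var ctx = this;
    return function (cb) {
      fn.apply(ctx, args.concat(cb));
    };
  };
}

// usage: same shape as the read thunk above
var delay = thunkify(function (ms, cb) {
  setTimeout(function () { cb(null, ms); }, ms);
});

delay(10)(function (err, ms) {
  console.log(ms); // 10
});
```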

To be continued...

These last examples may look cumbersome, but in the next part we will discover tools that take these parts out of our code, leaving just the good parts. We will also finally get to know Koa and its smooth mechanics, which make web development such an ease.

Update: the second part is out: Getting Started with Koa - part 2