Dependency Injection in Node.js

Dependency injection is a software design pattern in which one or more dependencies (or services) are injected, or passed by reference, into a dependent object.

Reasons for using Dependency Injection

Decoupling

Dependency injection makes your modules less coupled resulting in a more maintainable codebase.

Easier unit testing

Instead of using hardcoded dependencies, you can pass the dependencies into the module you would like to use. With this pattern, in most cases you don't have to use modules like proxyquire.

Faster development

With dependency injection, once the interfaces are defined, team members can work on separate modules in parallel without running into merge conflicts.

How to use Dependency Injection using Node.js

First, let's take a look at how you could write your applications without dependency injection, and how you would transform them.



Sample module without dependency injection
// team.js
var User = require('./user');

function getTeam(teamId) {  
  return User.find({teamId: teamId});
}

module.exports.getTeam = getTeam;  

A simple test would look something like this:

// team.spec.js
var Team = require('./team');  
var User = require('./user');

describe('Team', function() {  
  it('#getTeam', function* () {
    var users = [{id: 1}, {id: 2}];

    this.sandbox.stub(User, 'find', function() {
      return Promise.resolve(users);
    });

    var team = yield Team.getTeam();

    expect(team).to.eql(users);
  });
});

What we did here is create a file called team.js that returns a list of users who belong to a single team. For this, we require the User model, so we can call its find method, which returns a list of users.

Looks good, right? But when it comes to testing it, we have to use test stubs with sinon.

In the test file, we have to require the User model as well, so we can stub its find method. Notice, that we are using the sandbox feature here, so we do not have to manually restore the original function after the test run.

Note: stubs won't work if the original object uses Object.freeze.


Sample module with dependency injection
// team.js
function Team(options) {  
  this.options = options;
}

Team.prototype.getTeam = function(teamId) {  
  return this.options.User.find({teamId: teamId});
};

function create(options) {  
  return new Team(options);
}

module.exports.create = create;

You could test this file with the following test case:

// team.spec.js
var Team = require('./team');

describe('Team', function() {  
  it('#getTeam', function* () {
    var users = [{id: 1}, {id: 2}];

    var fakeUser = {
      find: function() {
        return Promise.resolve(users);
      }
    };

    var team = Team.create({
      User: fakeUser
    });

    var foundUsers = yield team.getTeam();

    expect(foundUsers).to.eql(users);
  });
});

Okay, so how does the version with dependency injection differ from the previous one? The first thing you can notice is the use of the factory pattern: we use it to inject options/dependencies into the newly created object - this is where we can inject the User model.

In the test file, we have to create a fake model that will represent the User model, then we simply inject it by passing it to the create function of the Team module. Easy, right?

Dependency Injection in Real Projects

You can find dependency injection examples in lots of open-source projects. For example, most of the Express/Koa middlewares that you use in your everyday work use the very same approach.

Express middlewares
var express = require('express');  
var app = express();  
var session = require('express-session');

app.use(session({  
  store: require('connect-session-knex')()
}));

The code snippet above uses dependency injection with the factory pattern: we pass the connect-session-knex module to the session middleware - it has to implement an interface that the session module will call.

In this case the connect-session-knex module has to implement the following methods (a minimal sketch of such a store follows the list):

  • store.destroy(sid, callback)
  • store.get(sid, callback)
  • store.set(sid, session, callback)
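
To make the contract concrete, here is a minimal, purely illustrative in-memory store that satisfies these three methods - this is not how connect-session-knex is implemented, just a sketch of the interface:

// my-memory-store.js - illustrative only
var sessions = {};

module.exports = function () {
  return {
    destroy: function (sid, callback) {
      delete sessions[sid];
      callback();
    },
    get: function (sid, callback) {
      callback(null, sessions[sid] || null);
    },
    set: function (sid, session, callback) {
      sessions[sid] = session;
      callback();
    }
  };
};

You could pass it to the session middleware the same way: store: require('./my-memory-store')().
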
Hapi plugins

The very same concept can be found in Hapi as well - the following example injects the handlebars module as a view engine for Hapi to use.

server.views({  
  engines: {
    html: require('handlebars')
  },
  relativeTo: __dirname,
  path: 'templates'
});
Recommended reading

Node.js Best Practices - Part 2: The next chapter of Node.js best practices, featuring pre-commit checks, JavaScript code style checker and configuration best practices.


Do you use dependency injection in your projects? If so, how? Please share your thoughts, projects or examples in the comments below.

Swagger for Node.js HTTP API Design


Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment.

With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability.

Swagger Basics

That sounds nice, doesn't it? Let me explain it a little bit more: these kinds of API description languages help us developers create rich documentation for our services. Basically, it is a way to tell the consumer (be it a web frontend or a mobile app) how to use the API, what endpoints are available to call and what their exact return values are. In this article, we are going to take a look at how you can start using Swagger with Node.js today.

It is a contract between the backend and the frontend developers that takes care of the dependency between the two sides. If the document changes, you can see that the API changed and adapt to it quickly.

It might be a good approach to keep the document in a separate repository and open discussions about it with the consumers. This way you can make sure that your users will be satisfied with the structure of your API. It can be a source of conflicts, but it can also help to handle them.

Here at RisingStack we started to use this approach on a much higher level, but the credit goes to the KrakenJS team, who have done so much work creating swaggerize-hapi, which makes working with Swagger a smart pick. We adapted their way of generating the application's routing based on the Swagger document.

Having this description, we can take API design a bit further by generating the whole routing system in our application. This way we only have to care about our business logic and don't have to bother with routing or even validation.

With Swagger, no more:

  • inconsistent API description
  • arguments between developers (at least not on this)
  • silently breaking applications
  • documentation writing, but I am sure no one is going to miss that

If you read our blog you are familiar with the fact that we're using Hapi for most of our node services.

What we have to build is essentially a Hapi plugin that we plug into our server. With JOI validation available, we not only get the plain routes, but the types are also cast to the types defined in the description, and the payload is already validated. That's what I call Swagger.

But enough of the theory, let's see some examples!

The Swagger Descriptor

This methodology is called design-driven development. First we design our endpoints' behavior by describing them in either a YML or a JSON file. This is the most important task, and everyone on the team should take part in it.

I prefer YML over JSON, but that's really just personal preference.

This is a boilerplate Swagger document, it has a pretty readable look:

swagger: '2.0'  
info:  
  title: SAMPLE API
  version: '0.0.1'
host: 0.0.0.0  
schemes:  
  - http
  - https
basePath: '/v1'  
produces:  
  - application/json

To specify paths, we have to add additional properties to our YML document.

paths:  
  /info:
    get:
      tags:
      - info
      summary: returns basic info of the server
      responses:
        200:
          description: successful operation
        default:
          description: unexpected error
          schema:
            $ref: Error

What this snippet does is create an /info endpoint that returns 200 OK if everything went well, and an error if something bad happened.

But wait, what is $ref? That is Swagger's way to stay DRY. You can define the API resources in your Swagger file. Write once, use anywhere.

Using Swagger with Node.js

Let's create a User resource; users commonly require a username and a password.
Upon POST-ing this resource to the server, it will be validated against this very schema. That's something enjoi does magically for you: no more validation is needed in your route handler (in the background it simply creates joi schemas from JSON schemas).

definitions:  
  User:
    type: object
    required:
    - username
    - password
    properties:
      id:
        type: string
      username:
        type: string
      password:
        type: string
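
To give you an idea of what enjoi does with this definition, a hand-written joi equivalent would look roughly like the following - shown only for illustration, the generated schema is created for you automatically:

var Joi = require('joi');

// roughly what enjoi derives from the User definition above
var userSchema = Joi.object().keys({
  id: Joi.string(),
  username: Joi.string().required(),
  password: Joi.string().required()
});

Joi.validate({ username: 'john', password: 'secret' }, userSchema, function (err, value) {
  // err is null when the payload matches the schema
});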

When creating a server, just create a Hapi plugin for your API.

var Hapi = require('hapi');  
var Path = require('path');  
var swaggerize = require('swaggerize-hapi');

var server = new Hapi.Server();

server.register({  
    plugin: swaggerize,
    options: {
        api: require('./config/pets.json'),
        handlers: Path.join(__dirname, './handlers')
    }
});
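
The handlers directory above contains the route handlers. Assuming the usual swaggerize convention of one file per path, exporting a function per HTTP verb, a handler for the earlier /info endpoint could look like this (the file name and payload are only examples):

// handlers/info.js
module.exports = {
  get: function (request, reply) {
    // only business logic lives here - routing and validation come from the Swagger document
    reply({ version: '0.0.1', status: 'ok' });
  }
};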

Swagger for Microservices

Initially, we talked about using Swagger for defining the communication between client and server - but it can work between servers as well.

If you have multiple HTTP-based microservices, it is quite easy to get lost with all their interfaces - but not with Swagger. You can simply build an API catalog with all your services and their exposed functionality, make it searchable, and you will never implement anything twice.

The Swagger UI

The builder automatically creates the /api-docs endpoint where the JSON description is available.

Using that, Swagger has an online viewer where users can try your API in just a couple of clicks. Anyone can view your API definition and try those POSTs, PUTs and DELETEs live on the page. Definitely check it out, it spares you the time of building a docs page: Swagger-UI.

Swagger UI for Node.js

They even have a Docker image available - plug and play: with just a couple of commands you can run your own Swagger UI.

docker build -t swagger-ui-builder .  
docker run -p 127.0.0.1:8080:8080 swagger-ui-builder  

Huge thanks to the guys working on this. Keep up the awesome work!

Further Readings

Node.js Production Checklist

Intro

Previously we talked about Node.js best practices then best practices again and how to run Node.js in production.

In this post I'd like to give you a general checklist of what you should do before going to production with Node.js. Most of these points apply not just to Node.js, but to every production system.

Disclaimer: this checklist is just scratching the surface - every production deployment is different, so make sure you understand your system, and use these tips accordingly.

Deployment

Even in bigger systems we see lots of manual steps involved in deployment. This approach is very error-prone - if someone forgets something, you will have a bad time. Never deploy manually.

Instead you can use tools like Codeship or Shippable if you are looking for hosted solutions, or Jenkins if you are going to set up a more complex pipeline.

Speaking of deployment: you may want to take a look at immutable infrastructures as well, and what challenges they can solve for you.

Security

Security is the elephant in the room - but it shouldn't be. Let's take a look at some of the possible security issues, and how you can fix them.

NPM Shrinkwrap

npm shrinkwrap  

This command locks down the versions of a package's dependencies so that you can control exactly which versions of each dependency will be used when your package is installed.

Uhm, ok, but why do you need this? Imagine the following scenario: during development everything works as expected and all your tests pass, but your production environment is broken. One of the reasons can be that a new version of a package you are using was released that contained breaking changes.

This is why you should use SemVer as well - but still, we make mistakes, so we better prepare for them. Before pushing your changes to production, use npm shrinkwrap.

Node Security Project CLI

Once you have your npm-shrinkwrap.json, you can check whether your dependencies have known vulnerabilities.

For this you have to install nsp using npm i nsp -g.

After that just use nsp audit-shrinkwrap, and hopefully you will get No vulnerable modules found as a result. If not, you should update your dependencies.

For more on Node.js Security you should watch Adam Baldwin's talk and read our blogpost dealing with Node.js Security.

Use VPNs and Private Networks

Private networks are networks that are only available to a restricted set of users and servers.

A VPN, or virtual private network, is a way to create secure connections between remote computers and present the connection as if it were a local private network. This provides a way to configure your services as if they were on a private network and connect remote servers over secure connections. - Digital Ocean

But why? Using VPNs you can create a private network that only your servers can see. With this solution your communication will be private and secure. You only have to expose the interfaces that your clients actually need - no need to open up a port for Redis or Postgres.

Logging

In short: log everything, all the time - not just in your Node.js application, but on the operating system level as well. One of the most popular solutions for this is Logstash.

But why do you need logging? Just a couple of the use cases (sure, sometimes they overlap):

  • find problems in applications running in production
  • find security holes
  • oversee your infrastructure

Speaking of Node, you can use either Winston or Bunyan to do logging.
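
A minimal example with Winston - Bunyan works similarly with its createLogger - could look like this (the fields are made up for illustration):

var winston = require('winston');

// in production you would also configure transports that ship
// these entries to Logstash or another central log store
winston.info('user signed up', {
  userId: 42,
  plan: 'free'
});

winston.error('payment failed', {
  orderId: 'abc-123',
  reason: 'card declined'
});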

You can check out related blogposts by Pinterest and Cloudgear as well.

Monitoring & Alerting

Monitoring is crucial - but not just in mission critical systems. If something bad happens, like your landing page is not showing up, you want to be notified about it.

There are tons of tools out there coming to the rescue like Zabbix, New Relic, Monit, PagerDuty, etc. The important thing here is to pick what suits you the best, and use that - just make sure you have it set up. Do not have illusions that your system won't fail - I promise you, it will and you are going to have a bad time.

For a more detailed take on monitoring, I strongly suggest you watch the following video on monitoring the Obama campaign.

Caching Node.js Production Applications

Cache (almost) everything - by caching I don't only mean classical HTTP caching, but caching on the database level as well.

The whys:

  • smaller load on your servers -> cost-effective infrastructure
  • faster responses to the clients -> happy users

Speaking of HTTP REST APIs, it is really easy to implement caching, but one thing you should keep in mind: GET endpoints can be cached, while PUT, POST and DELETE endpoints cannot.
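
With Express, for example, you can mark a GET endpoint as cacheable simply by setting the Cache-Control header - the max-age below is just an example value:

var express = require('express');
var app = express();

app.get('/api/products', function (req, res) {
  // let browsers and intermediate proxies cache this response for 5 minutes
  res.set('Cache-Control', 'public, max-age=300');
  res.json([{ id: 1, name: 'sample product' }]);
});

app.listen(3000);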

For a reference implementation I suggest you read API Caching 101 by Fastly.

Outro

This checklist applies to most systems, not just the ones implemented in Node.js. As this list is just scratching the surface, I would like to ask you: what would you add to it? All comments and feedback are very welcome!

Recommended reading

A step-by-step guide on how to set up your own Node.js production environment.

Why You Should Start Using Microservices

This post aims to give you a better understanding of microservices: what they are, what the benefits and challenges of using this pattern are, and how you can start building them using Node.js.

Before diving into the world of microservices let us take a look at monoliths to better understand the motivation behind microservices.

The Monolithic Way

Monoliths are built as a single unit, so they are responsible for every possible functionality: handling HTTP requests, executing domain logic, database operations, communication with the browser/client, handling authentication and so on.

Because of this, even the smallest change in the system requires building and deploying the whole application.

Building and deploying is not the only problem - just think about scaling. You have to run multiple instances of the monolith, even if you know that the bottleneck lies in one component only.

Take the following simplified example:

monolithic application

What happens when your users suddenly start uploading lots of images? Your whole application will suffer performance issues. You have two options here: either scale the application by running multiple instances of the monolith, or move the logic into a microservice.

The Microservices Way

An approach to developing a single application as a suite of small services. - Martin Fowler

The microservice pattern is not new. The term microservice was discussed at a workshop of software architects near Venice in May of 2011 to describe what the participants saw as a common architectural style that many of them had been recently exploring.

The previous monolith could be transformed using the microservices pattern into the following:

microservices

Advantages of Microservices

Evolutionary Design

One of the biggest advantages of the microservices pattern is that it does not require you to rewrite your whole application from the ground up. Instead what you can do is to add new features as microservices, and plug them into your existing application.

Small Codebase

Each microservice deals with one concern only - this results in a small codebase, which means easier maintainability.

Easy to Scale

Back to the previous example: what happens when your users suddenly start uploading lots of images?

In this case you have the freedom to only scale the Image API, as that service will handle the bigger load. Easy, right?

Easy to Deploy

Most microservices have only a couple of dependencies so they should be easy to deploy.

System Resilience

As your application is composed of multiple microservices, if some of them go down, only some features of your application will go down, not the entire application.

New Challenges

The microservice pattern is not the silver bullet for designing systems - it helps a lot, but also comes with new challenges.

Communication Between Microservices

One of the challenges is communication - our microservices will rely on each other and they have to communicate. Let's take a look at the most common options!

Using HTTP APIs

Microservices can expose HTTP endpoints, so that other services can consume their functionality.

But why HTTP? HTTP is the de facto standard way of information exchange - every language has some kind of HTTP client (yes, you can write your microservices using different languages). We also have the toolset to scale it, no need to reinvent the wheel. Have I mentioned that it is stateless as well?
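
As a sketch, the Image API from the earlier example could expose a plain HTTP endpoint, and the main application could call it with any HTTP client - the URL and payload below are made up:

// image-api.js - a standalone microservice
var express = require('express');
var app = express();

app.post('/resize', function (req, res) {
  // the actual image processing logic would live here
  res.json({ status: 'queued' });
});

app.listen(3001);

// main application - calling the Image API over HTTP
var request = require('request');

request.post('http://image-api.internal:3001/resize', { json: { imageId: 42 } },
  function (err, response, body) {
    // handle the Image API's answer here
  });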

Using Messaging Queues

Another way for microservices to communicate with each other is to use messaging queues like RabbitMQ or ZeroMQ. This way of communication is extremely useful when talking about long-running worker tasks or mass processing. A good example of this is the Email API: when an email has to be sent out, it is put into a queue, and the Email API processes it and sends it out.
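
With RabbitMQ and the amqplib client, putting an email job on a queue could look like the following sketch - the queue name and message are illustrative:

var amqp = require('amqplib');

amqp.connect('amqp://localhost')
  .then(function (connection) {
    return connection.createChannel();
  })
  .then(function (channel) {
    return channel.assertQueue('emails').then(function () {
      // the Email API consumes this queue and sends the email out
      channel.sendToQueue('emails', new Buffer(JSON.stringify({
        to: 'user@example.com',
        template: 'welcome'
      })));
    });
  });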

Service Discovery

Speaking of communication: our microservices need to know how they can find each other so they can talk. For this we need a system that is consistent, highly available and distributed. Take the Image API as an example: the main application has to know where it can find the required service, so it has to acquire its address.

Useful libraries/tools/frameworks

Here you can find a list of projects that we frequently use at RisingStack to build microservice-based infrastructures. In the upcoming blogposts you will get a better picture on how you can fit them into your stack.

For HTTP APIs:

For messaging:

For service discovery:

Next up

This was the first post in a series dealing with microservices. In the next one we will discover how you can implement service discovery for Node applications.

Are you planning to introduce microservices into your organization? Look no further, we are happy to help! Check out the RisingStack webpage to get a better picture of our services.

NodeSummit Retrospective

NodeSummit took place on 10-11 February in San Francisco. Most of the major figures from the Node community were there, including Isaac Schlueter, Eran Hammer, Charlie Robbins, Bert Belder, Raquel Vélez, and many more. Peter from RisingStack gave a talk on isomorphic JavaScript as well.

Day 1 Agenda

Day 1

NodeSummit Welcome

The conference was held at the Mission Bay Conference Center and featured some big adopters of Node.js like Netflix, MasterCard, Intuit, PayPal, Yahoo and Dow Jones.

Designing Node.js Applications panel with Dan Shaw, Jeff Harrell, Aarthi Jayaram, William Kapke, Luca Maraschi and Siddharth Ram

The format went like this: there was a main stage where the panel talks were held, and three smaller rooms. Sadly, these rooms were so small that it was really hard to get into those talks.

Most of the talks targeted beginner/intermediate levels, so if you had been using Node.js for years, not a lot of new things came up.

At the end of the first day Joyent announced the Node Foundation. Joyent will join forces with IBM, PayPal, Microsoft Corp, Fidelity and The Linux Foundation to establish the Node.js Foundation, which will be committed to the continued growth and evolution of Node.js, while maintaining a collaborative environment to benefit all users.

After Party

The After Party took place at New Relic's HQ with a pretty amazing view:

The view from the New Relic office

NodeSummit After Party at New Relic

The after party was great - talking with new people, meeting old friends.

Fun fact: guess what the wifi password was at New Relic: RubyOnRails - come on guys, it was NodeSummit :)

Day 2

Day 2 promised to be interesting as well - and it mostly lived up to the expectation, with talks on IoT, the future of JavaScript and Node.js too.

The IoT discussion panel

After the talks there was another After Party, now at the venue, sponsored by Rally Ventures. Again, it was a great opportunity to talk with all the awesome people who attended the event.

Sum up

It was a great event if you were looking to meet all the major players who have started to adopt or have already adopted Node.js in the enterprise. The talks were mostly aimed at other enterprises, signalling that Node.js is enterprise-ready.

All the talks were recorded and will be published in two weeks - stay tuned, and follow NodeSummit in the meantime to get updated!

The Best Talks - an Opinionated List

Shipping Node.js Applications with Docker and Codeship

Setting up continuous deployment of Node.js applications now is easier than ever. We have tools like Jenkins, Strider, Travis or Codeship. In this article we are going to use Codeship with Docker and Ansible to deploy our Node.js application.

A key principle I want to emphasize before diving deeper is immutable infrastructures: what they are and how they can make your life easier.

Immutable Infrastructures

Immutable infrastructures split a system into data and everything else, and the everything else part is replaced on each deploy. Not even security patches or configuration changes happen on running production systems. To achieve this we can choose between two approaches: the machine-based and the container-based approach.

Machine-based

Machine-based immutability can work like this: on each deploy you set up entirely new EC2 machines and deploy your application on them. If everything is okay, you simply modify your load balancer configuration to point to the new machines. Later on you can delete the old ones.

Container-based

You can think of the container-based approach as an improvement of the machine-based one: on one machine you can have multiple containers running. Docker makes this relatively easy. Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications.

Sure, you could use VMware or VirtualBox for the container-based way, but while a Docker container starts in seconds, the others take minutes.

Advantages of Immutable Infrastructures

In order to take full advantage of this approach, you should have a Continuous Delivery pipeline set up, with tests and orchestration as well.

The main advantages:

  • Going back to older versions is easy
  • Testing the new infrastructure in isolation is possible
  • Simplify change management as servers never rot

Get started

It is time to get our hands dirty! We are going to create and deploy a Hello Docker & Codeship application.

For this, we are going to use https://github.com/RisingStack/docker-codeship-project. It is a simple application that returns the "We <3 Docker & Codeship" string via HTTP.

Here is what we are going to do:

  • When someone pushes to the master branch, GitHub will trigger a build on Codeship
  • If everything is OK, Codeship triggers a build on Docker Hub
  • After the new Docker image is ready (pushed), Docker triggers a webhook
  • Ansible pulls the latest image to the application servers (Docker Deployer)

Docker with Ansible and Codeship

Create a Docker Hub account

What is Docker Hub?

Docker Hub manages the lifecycle of distributed apps with cloud services for building and sharing containers and automating workflows.

Go to Docker Hub and sign up.

Setting up a Docker repository

After signing up, and adding your GitHub account, go under My Profile > My Repositories > Add repositories and click Automated build.

After setting up your repository, enable Build triggers. This will result in a command similar to this:

$ curl --data "build=true" -X POST https://registry.hub.docker.com/u/gergelyke/docker-codeship-project/trigger/TOKEN/

Also make sure that you deactivate the GitHub commit hook under Automated build - remember, Codeship will listen for commits to the git repository.

That's it, your Docker Hub is ready to be used by Codeship.

Get a Codeship account

Go to Codeship, and get one.

Set up your repository on Codeship

You can connect your GitHub/BitBucket account to Codeship. After you have given access to Codeship, you will see your repositories listed. Here I chose the repository mentioned before. Then choose Node.js and click "Save and go to my dashboard".

Modify your Deploy Commands

Under the deploy settings, choose custom script - insert the previously generated curl command from Docker Hub. That's it :).

The Docker Deployer

This part does not come out of the box. You have to implement a little API server that listens to the Docker Hub webhook. When the endpoint is called, it runs Ansible, which pulls the latest Docker image to the application servers.
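
A minimal sketch of such a deployer - the endpoint path and playbook name are made up - could be a tiny Express server shelling out to Ansible:

var express = require('express');
var exec = require('child_process').exec;

var app = express();

// Docker Hub calls this webhook after a successful image push
app.post('/webhooks/docker-hub', function (req, res) {
  exec('ansible-playbook deploy.yml', function (err, stdout, stderr) {
    if (err) {
      return res.status(500).send('deploy failed');
    }
    res.send('deploy triggered');
  });
});

app.listen(5000);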

Note: of course, you are not limited to using Ansible - any other deployment/orchestration tool will do the job.

Always keep shipping

As you can see, setting up a Continuous Delivery pipeline with an immutable infrastructure can be achieved easily - and it can be used not only in your production environments, but in staging or development environments as well.


Note: This post was picked up and republished by Codeship. You can read more about how to ship applications with Docker and Codeship on their blog.

IO.js Overview


Version 1.0.0 of io.js was released today. This post is going to give you an overview of what io.js is, what the differences and benefits are and what the aim of the project is.

io.js - a node.js fork

The beginning - how it started

First of all, io.js is a fork of Node.js, created by Fedor Indutny. With that said, Fedor is not the leader of the project - io.js operates under an open governance structure. The key people involved in the fork are:

But why did this fork happen?

In July 2014 they started working with Joyent to ensure that the contributors and the community have the ability to help fix the problems that Node.js faces or will face.

Then in August Node Forward was started to help improve Node.js:

A broad community effort to improve Node, JavaScript, and their ecosystem through open collaboration.

Due to trademark restrictions, the guys could not do a release - but luckily for the community, all those efforts are incorporated into io.js.

After this, Fedor decided to fork Node.js under the name io.js.

The main differences

As you may have already noticed, io.js introduces proper semver, starting with 1.0.0. io.js also comes with nightly builds.

But what's really great about this release is the updated V8 engine (from version 3.14.5.9 in Node.js v0.10.35 and 3.26.33 in Node.js v0.11.14 to 3.31.74.1 in io.js v1.0.0), which brings us ES6 features without the --harmony flag - at least those that don't require a flag in V8 either.

What about the staging/in-progress features?

All the new features that are considered staging/in-progress by the V8 team are available under the flags starting with --harmony. These are not meant for production systems.

Changes in the core modules

io.js not only brings us ES6, but also new (experimental) core modules and new features/fixes to the existing ones as well.

Available ES6 features

The following features are available without using any flags (a short example follows the list):

  • Block scoping (let, const)
  • Collections (Map, WeakMap, Set, WeakSet)
  • Generators
  • Binary and Octal literals
  • Promises
  • New String methods
  • Symbols
  • Template strings
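
For example, the following snippet runs on io.js 1.0.0 without any flags (the cat data is made up, and let/const require strict mode at this V8 version):

'use strict';

const cats = new Map();
cats.set('Felix', 2);
cats.set('Tom', 3);

function* greetings() {
  yield `We have ${cats.size} cats`;
  yield `Promises are available too: ${typeof Promise}`;
}

var iterator = greetings();
console.log(iterator.next().value);  // We have 2 cats
console.log(iterator.next().value);  // Promises are available too: function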

You can always check which version of V8 is used by your installed io.js simply with:

iojs -p process.versions.v8  

With this information you can check the available features. You can also check this ES6 compat-table.

New modules

io.js also ships with new core modules that can be used without installing anything from NPM. These are:

  • smalloc: a new core module for doing (external) raw memory allocation/deallocation/copying in JavaScript
  • v8: core module for interfacing directly with the V8 engine

For the complete API Reference, check: https://iojs.org/api/

For the complete changelog, check: https://github.com/iojs/io.js/blob/v1.x/CHANGELOG.md

Get started

To get started with io.js, visit iojs.org and download the installer for your system.

After installing it, you can simply start your application the very same way you did with Node.js:

iojs app.js  

If you are used to nvm, then we have good news for you: an io.js compatible version is coming soon!

I would encourage you to test your modules with io.js, and report to https://github.com/iojs/io.js if you find something unexpected.

What's next?

In the longer run, io.js and Node.js will be merged back together - at least that's the plan. We hope that the project accomplishes its goals and helps the JavaScript community move forward.

Node.js is Enterprise-Ready

We get asked "Should we start using Node.js?" a lot. When people ask this, they usually mean: is the technology production-ready, is it easy to get started with, how great is the community, and what are the benefits of choosing Node.js over other technologies.

In this post I am going to give you an overview of the current state of Node.js and the benefits of using it, take a look at NPM and the open source community, and showcase case studies.

Current State of Node.js

Node.js is maintained and developed by Joyent, where Ryan Dahl started working on it in 2009. Six years, 10,000+ commits and 500+ contributors later, Node.js is becoming the go-to technology for the enterprise as well, with companies like Walmart, PayPal, Uber and Groupon on board.

NPM, the package manager for JavaScript, has more than 115,000 open source modules (and growing fast) that can be used in your projects without reinventing the wheel. Yes, NPM has more modules than Maven, the package manager for Java.

Node.js also has a great community as both individuals and big companies are actively contributing to open source projects like Browserify or Hapi.

Benefits of Using Node.js

Productivity

When PayPal started using Node.js, they reported a 2x increase in productivity compared to their previous Java stack. How is that even possible?

First of all - as I already mentioned - NPM has an incredible amount of modules that can be used instantly. This saves a lot of development effort on your part.

Secondly, as Node.js applications are written using JavaScript, front-end developers can also easily understand what's going on, and make changes as necessary. This saves you valuable time again as developers will use the same language on the entire stack.

Performance

Black Friday: 1.5 billion dollars were spent online in the US in a single day. It is crucial that your site can keep up with the traffic - Walmart, one of the biggest retailers, used Node.js to serve 500 million page views on Black Friday without a hitch.

The same applies to PayPal - they served Black Friday without problems with the help of Node.js.

PayPal also stated their performance gains when migrated to Node.js:

35% decrease in the average response time for the same page. This resulted in the pages being served 200ms faster - something users will definitely notice.

Happy Users

As your velocity increases because of the productivity gains, you can ship features and products sooner - products that run faster, resulting in a better user experience.

Kissmetrics' study showed that 40% of people abandon a website that takes more than 3 seconds to load, and 47% of consumers expect a web page to load in 2 seconds or less. Every product manager should take this seriously.

Happy Developers

Finding top talent in 2015 will be harder than ever - the possibility to use cutting edge technologies on a daily basis can help find and retain the best developers.

Where Can You Use Node.js?

JavaScript is everywhere: with JavaScript you can build web frontends, backend services, mobile and desktop applications, and a lot more.

How RisingStack Can Help

Considering moving your stack to Node.js or starting a new project soon? We help you make the best decisions so your business can prosper like never before. Interested in talking to us? Ping us!

Functional Reactive Programming with the Power of Node.js Streams

The goal of this article is not to go into the very details of Functional Reactive Programming. It's more about getting you interested in Node.js streams and the concept of functional reactive programming. Please feel free to share your opinion below.

Intro

Before we get started, I would like to tell you a bit about my relationship with Functional Reactive Programming (FRP). I really like the concept and I use it whenever I can without sacrificing the features of the language. I will mostly talk about JavaScript and Node.js.
What I mean: I'm not going to compile to JS from another language to be perfectly functional, and I'm not going to force immutability except when it provides reasonable performance, like Omniscient does at rendering. I can also accept that proper tail recursion will arrive only with ES6.

I'm not stating that it would not be good to have immutability, for example. I'm just saying that I don't want a magic code base full of hacks that is both hard to read and understand.

RP, FRP

You may have heard of functional reactive programming. Here’s the gist: FRP uses functional utilities like map, filter, and reduce to create and process data flows which propagate changes through the system: hence, reactive. When input x changes, output y updates automatically in response. - The Two Pillars of JavaScript — Pt 2: Functional Programming

So FRP stands for Functional Reactive Programming, which is a type of Reactive Programming. I'm not here to turn this into a religious question, and I will use the term FRP in this article. Please don't be too hard on me in the comments ;)

Why is FRP good for me?

Imagine the following scenario:

  1. the user clicks a button
  2. it triggers an Ajax call (which can be fired only once every 500ms)
  3. and shows the results on the UI.

How would you implement this in the classical way?
Probably you would create a click handler that triggers the Ajax request, which then calls the UI render.

I mean something like this:

$('#cats-btn').click(function () {  
  if(timeDiff < 500) {  return; }
  getDataFromServer('cats');
  // save time
});
function getDataFromServer(type) {  
  $.ajax(URL + type).done(function (cats) {
    renderUI(cats.map(formatCats));
  });
}
function formatCats(cat) {  
  return { name: 'Hello ' + cat.name }
}
function renderUI(data) {  
  UI.render(data);
}

What is the conceptual problem with this solution?

The code doesn't describe what it does. You have a simple user flow: -1-> click btn -2-> get data -3-> show on ui, but it is hidden and hard coded.

Wouldn't it be awesome to have something like the following, a more descriptive piece of code?

_('click', $('#cats-btn'))  
  .throttle(500)    // can be fired once in every 500ms 
  .pipe(getDataFromServer)
  .map(formatCats)
  .pipe(UI.render);

As you can see, the flow of your business logic is highlighted, you can imagine how useful it can be when you have more complex problems and have to deal with different async flows.

Reactive Programming raises the level of abstraction of your code so you can focus on the interdependence of events that define the business logic, rather than having to constantly fiddle with a large amount of implementation details. Code in RP will likely be more concise. - staltz

Are we talking about promises? Not exactly. Promise is a tool, FRP is a concept.

What about Node streams?

Ok. Until this point this article is yet another FRP article. Let's talk about Node ;)

We have great FRP libraries out there like RxJS and Bacon.js (by the way Bacon has the most hipster name and logo in the universe) which provide lots of great functionality to help being reactive. BUT...

...every time I read or hear about FRP and event streams, the first thing that comes to my mind is that Node has this beautiful stream interface. But most of the popular FRP libraries just don't leverage it - they implemented their own stream-like API.
They provide some compatibility with Node streams, like Rx.Node.fromStream() and Bacon.fromBinder(), but they are not fully compatible with them. This makes me sad.
Node.js is already on the client side with browserify and webpack, npm is full of great stream libraries, and we cannot use them out of the box.

I was wondering why they don't use it, but I didn't find anything useful on the topic. Please comment if you have something in mind about this.

But can't we, really? Come on, it's Node land. Of course someone has already done it, it's called Highland.js:

...using nothing more than standard JavaScript and Node-like Streams

Highland is created and maintained by @caolan - you know, the guy who also created async.

Dominic Tarr also implemented event-stream to make our lives easier with streams, but it has fewer features compared to Highland.js, so let's continue with that.

Playing with Highland and node streams

Prerequisites: we are on the client side using a browser and our code is bundled by webpack.
You can find the full runnable code on GitHub.

// from node
var util = require('util');  
var stream = require('stream');  
// from npm
var _ = require('highland');  
var websocket = require('websocket-stream');

var catWS = websocket('ws://localhost:3000');  

Then we create a native Node.js writable stream to write to the console, but it could have been a jQuery append or anything else.

var toConsole = new stream.Writable({  
  objectMode: true 
});
toConsole._write = function (data, encoding, done) {  
  console.log(data);
  done();
};

Then we create our filter function for .filter()

function underThree (cat) {  
  return cat.age < 3;
}

The main application: easy to understand what it does, right?

_(catWS)  
  .map(JSON.parse)
  .sequence()
  .filter(underThree)
  .map(util.format)
  .pipe(toConsole);

I think this is a good example of how easily you can describe with code what your application does.
This is a simple example with a one-way flow; you can handle much more complex async problems with the merge, ratelimit and parallel methods.

For more functionality, visit the Highland.js documentation.

Streams for the web

Proper streams are coming to the browser, and Domenic Denicola has already given a talk on it: Streams for the Web. I can only hope that they arrive soon and will be fully compatible with Node.js streams. That would be awesome.

Useful links / readings

Update: If we want to be accurate, Highland.js, Rx and Bacon.js aren't FRP:

I think an accurate description of Rx and Bacon.js is "compositional event systems inspired by FRP" - Conal Elliot

Node.js Best Practices - Part 2

You may remember our previous post on Node.js best practices. In this article we will continue with more best practices that can help you become a better Node.js developer.

Consistent Style

When developing JavaScript applications in a bigger team, it is important to create a style guide that everyone accepts and adapts to. If you are looking for inspiration, I would recommend checking out the RisingStack Node.js Style Guide.

But this is just the first step - after you set a standard, all of your team members have to write code using that style guide. This is where JSCS comes into the picture.

JSCS is a code style checker for JavaScript. Adding JSCS to your project is a piece of cake:

npm install jscs --save-dev  

The very next step you have to take is to enable it from the package.json file by adding a custom script:

"scripts": {  
    "jscs": "jscs index.js"
}

Of course, you can add multiple files/directories to check. But why did we create the custom script inside the package.json file? We installed jscs as a local dependency only, so we can have multiple versions on the same system. This works because NPM puts node_modules/.bin on the PATH while executing scripts.

You can set your validation rules in the .jscsrc file, or use a preset. You can find the available presets here, and can use them with --preset=[PRESET_NAME].

Enforce JSHint / JSCS Rules

Your build pipeline should contain JSHint and JSCS as well, but it may be a good idea to run pre-commit checks on the developers' computers too.

To do this easily you can use the pre-commit NPM package:

npm install --save-dev pre-commit  

and configure it in your package.json file:

"pre-commit": [  
    "jshint",
    "jscs"
],

Note that pre-commit will look up what to run in your package.json's scripts section. By enabling this, these checks will run before every commit.

JS over JSON for configuration

We see that a lot of projects use JSON files as configuration sources. While this may be a widespread approach, JS files provide more flexibility. For this purpose we encourage you to use a config.js file:
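
A sketch of such a file - the configuration keys are only examples - could look like this:

// config.js
module.exports = {
  port: process.env.PORT || 3000,
  db: {
    uri: process.env.MONGODB_URI || 'mongodb://localhost/myapp'
  },
  // with a JS file you can compute values and add comments - something JSON cannot do
  isProduction: process.env.NODE_ENV === 'production'
};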

Use NODE_PATH

Have you ever encountered something like the following?
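
Something like this, where deeply nested files reach back with long relative paths (the paths are made up for illustration):

// src/api/controllers/user.js
var User = require('../../../lib/models/user');
var logger = require('../../../lib/utils/logger');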

When you end up with a quite complex project structure, requiring modules may get messy. To solve this problem you have two options:

  • symlinking your modules into the node_modules folder
  • use NODE_PATH

At RisingStack we use the NODE_PATH way, as symlinking everything into the node_modules folder takes extra effort and may not work on various operating systems.

Setting up NODE_PATH

Imagine the following project structure:

Node.js project structure for NODE_PATH

Instead of using relative paths, we can use NODE_PATH, which will point to the lib folder. In our package.json's start script we can set it and run the application with npm start:
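
In the package.json this could look like the following - the lib path is just an example:

"scripts": {
  "start": "NODE_PATH=lib node index.js"
}

With this in place, modules under lib can be required by name from anywhere in the project, for example require('models/user') instead of a long relative path.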

Dependency Injection

Dependency injection is a software design pattern in which one or more dependencies (or services) are injected, or passed by reference, into a dependent object.

Dependency injection is really helpful when it comes to testing. You can easily mock your modules' dependencies using this pattern.
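
A minimal sketch with illustrative module names - index.js wires in the real database module, while the test injects a fake one:

// index.js
var db = require('./db');
var team = require('./team')({ db: db });

// team.spec.js
var fakeDb = {
  find: function () {
    return Promise.resolve([{ id: 1 }]);
  }
};
var teamWithFakeDb = require('./team')({ db: fakeDb });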

In the example above we have two different dbs: in the index.js file we wire in the "real" db module, while in the test we simply create a fake one. This way we made it really easy to inject fake dependencies into the modules we want to test.

Need a helping hand in developing your application?

RisingStack provides JavaScript development and consulting services - ping us if you need a helping hand!