Node Hero - How to Deploy Node.js with Heroku or Docker

This article is the 12th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In this Node.js deployment tutorial, you are going to learn how to deploy Node.js applications either to a PaaS provider (Heroku) or by using Docker.

Upcoming and past chapters:

  1. Getting started with Node.js
  2. Using NPM
  3. Understanding async programming
  4. Your first Node.js HTTP server
  5. Node.js database tutorial
  6. Node.js request module tutorial
  7. Node.js project structure tutorial
  8. Node.js authentication using Passport.js
  9. Node.js unit testing tutorial
  10. Debugging Node.js applications
  11. Node.js Security Tutorial
  12. How to Deploy Node.js Applications [you are reading it now]
  13. Monitoring and operating Node.js applications

Deploy Node.js to a PaaS

Platform-as-a-Service providers can be a great fit for teams who want to do zero operations or create small applications.

In this part of the tutorial, you are going to learn how to use Heroku to deploy your Node.js applications with ease.


Prerequisites for Heroku

To deploy to Heroku, we have to push code to a remote git repository. To achieve this, add your public key to Heroku: after registering, head over to your account settings and save it there (alternatively, you can do it with the CLI).

We will also need to download and install the Heroku toolbelt. To verify that your installation was successful, run the following command in your terminal:

heroku --version  
heroku-toolbelt/3.40.11 (x86_64-darwin10.8.0) ruby/1.9.3  

Once the toolbelt is up and running, log in to use it:

heroku login  
Enter your Heroku credentials.  
Email: [email protected]  
Password:  

(For more information on the toolbelt, head over to the Heroku Devcenter.)

Deploying to Heroku

Create a new app on Heroku to deploy Node.js

Click Create New App, give it a name, and select a region. In a matter of seconds, your application will be ready, and the following screen will welcome you:

Heroku Welcome Screen

Go to the Settings page of the application and grab the Git URL. In your terminal, add it as the Heroku remote URL:

git remote add heroku HEROKU_URL  

You are ready to deploy your first application to Heroku - it is really just a git push away:

git push heroku master  

Once you do this, Heroku starts building your application and deploys it as well. After the deployment, your service will be reachable at https://YOUR-APP-NAME.herokuapp.com.

Heroku Add-ons

One of the most valuable parts of Heroku is its ecosystem, since dozens of partners provide databases, monitoring tools, and other solutions.

To try out an add-on, install Trace, our Node.js monitoring solution. To do so, look for Add-ons on your application's page, and start typing Trace, then click on it to provision. Easy, right?

Heroku addons

(To finish the Trace integration, follow our Heroku guide.)


Deploy Node.js using Docker

In the past few years, Docker has gained massive momentum and become the go-to containerization software.


In this part of the tutorial, you are going to learn how to create images from your Node.js applications and run them.

Docker Basics

To get started with Docker, download and install it from the Docker website.

Putting a Node.js application inside Docker

First, we have to get two definitions right:

  • Dockerfile: you can think of the Dockerfile as a recipe - it includes instructions on how to create a Docker image
  • Docker image: the output of building the Dockerfile - this is the runnable unit

In order to run an application inside Docker, we have to write the Dockerfile first.

Dockerfile for Node.js

In the root folder of your project, create a Dockerfile, an empty text file, then paste the following code into it:

FROM risingstack/alpine:3.3-v4.2.6-1.1.3

COPY package.json package.json  
RUN npm install

# Add your source files
COPY . .  
CMD ["npm","start"]  

Things to notice here:

  • FROM: describes the base image used to create a new image - in this case it is from the public Docker Hub
  • COPY: this command copies the package.json file to the Docker image so that we can run npm install inside
  • RUN: this runs commands, in this case npm install
  • COPY again - note that we have done the copying in two separate steps. The reason is that Docker creates layers from the command results, so if our package.json is not changing, it won't run npm install again
  • CMD: a Docker image can only have one CMD - this defines what process should be started with the image
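It's also worth adding a .dockerignore file next to the Dockerfile, so that locally installed modules and VCS metadata don't get copied into the image. A typical minimal version:

```
node_modules
npm-debug.log
.git
```

This keeps the image small and ensures npm install inside the container produces a clean dependency tree.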

Once you have the Dockerfile, you can create an image from it using:

docker build .  

Using private NPM modules? Check out our tutorial on how to install private NPM modules in Docker!

After successfully building your image, you can list your images with:

docker images  

To run an image:

docker run IMAGE_ID  

Congratulations! You have just run a Dockerized Node.js application locally. Time to deploy it!

Deploying Docker Images

One of the great things about Docker is that once you have a built image, you can run it anywhere - most environments will simply docker pull your image and run it.

Some providers that you can try:

  • AWS BeanStalk
  • Heroku Docker Support
  • Docker Cloud
  • Kubernetes on Google Cloud - (I highly recommend reading our article on moving to Kubernetes from our PaaS provider)

Setting them up is very straightforward - if you run into any problems, feel free to ask in the comments section!

Next up

In the next chapter of Node Hero, you are going to learn how to monitor and operate your Node.js application - so that it can be online 24/7.

If you have any questions or recommendations for this topic, write them in the comments section.

Moving a Node.js app from PaaS to Kubernetes Tutorial

From this Kubernetes tutorial, you can learn how to move a Node.js app from a PaaS provider while achieving lower response times, improving security and reducing costs.


Before we jump into the story of why and how we migrated our services to Kubernetes, it's important to mention that there is nothing wrong with using a PaaS. PaaS is perfect to start building a new product, and it can also turn out to be a good solution as an application advances - it always depends on your requirements and resources.

PaaS

Trace by RisingStack, our Node.js monitoring solution, was running on one of the biggest PaaS providers for more than half a year. We chose a PaaS over other solutions because we wanted to focus more on the product instead of the infrastructure.

Our requirements were simple; we wanted to have:

  • fast deploys,
  • simple scaling,
  • zero-downtime deployments,
  • rollback capabilities,
  • environment variable management,
  • various Node.js versions,
  • and "zero" DevOps.

What we didn't want to have, but got as a side effect of using PaaS:

  • big network latencies between services,
  • lack of VPC,
  • response time peaks because of the multitenancy,
  • larger bills (pay for every single process, no matter how small it is: clock, internal API, etc.).

Since Trace is developed as a group of microservices, you can imagine how quickly the network latency and billing started to hurt us.

Kubernetes tutorial

From our PaaS experience, we knew that we were looking for a solution that requires very little DevOps effort but provides a similar flow for our developers. We didn't want to lose any of the advantages mentioned above - however, we wanted to fix the outstanding issues.

We were looking for an infrastructure that is more configuration-based, and anyone from the team can modify it.

Kubernetes, with its configuration-focused, container-based, and microservice-friendly nature, convinced us.


Let me show you what I mean by these "buzzwords" in the upcoming sections.

What is Kubernetes?

Kubernetes is an open-source system for automating deployments, scaling, and management of containerized applications - kubernetes.io

I don't want to give a deep introduction to the Kubernetes building blocks here, but you need to know the basic ones for the upcoming parts of this post.

My definitions won't be 100% correct, but you can think of it as a PaaS to Kubernetes dictionary:

  • pod: your running containerized application together with its environment variables, disks, etc.; pods are born and die quickly, e.g. at deploys,

    • in PaaS: ~currently running app
  • deployment: configuration of your application that describes the state you need (CPU, memory, env. vars, Docker image version, disks, number of running instances, deploy strategy, etc.):

    • in PaaS: ~app settings
  • secret: lets you separate your credentials from environment variables,

    • in PaaS: doesn't exist; think of it as a shared, separate store for secrets like DB credentials
  • service: exposes your running pods by label(s) to other apps or to the outside world on the desired IP and port

    • in PaaS: built-in, non-configurable load balancer
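As a rough illustration of how these pieces fit together, a deployment config might look like this (names, image tag, and resource numbers are made up; field names follow the current Kubernetes API):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo            # hypothetical app name
  namespace: staging
spec:
  replicas: 2          # number of running pods
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo       # the label a service would select on
    spec:
      containers:
      - name: foo
        image: foo:b37d759   # Docker image version
        env:
        - name: NODE_ENV
          value: staging
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
```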

How to set up a running Kubernetes cluster?

You have several options here. The easiest one is to create a cluster in Google Container Engine, which is a hosted Kubernetes. It's also well integrated with other Google Cloud components, like load balancers and disks.

You should also know that Kubernetes can run anywhere: on AWS, DigitalOcean, Azure, etc. For more information, check out the CoreOS Kubernetes tools.

Running the application

First, we have to prepare our application to work well with Kubernetes in a Docker environment.

If you are looking for a tutorial on how to start an app from scratch with Kubernetes, check out their 101 tutorial.

Node.js app in Docker container

Kubernetes is Docker-based, so first we need to containerize our application. If you are not sure how to do that, check out our previous post: Dockerizing Your Node.js Application

If you are a private NPM user, you will also find this one helpful: Using the Private NPM Registry from Docker

"Procfile" in Kubernetes

We create one Docker image for every application (Git repository). If the repository contains multiple processes (like a server, a worker, and a clock), we choose between them with an environment variable. It may seem strange, but we don't want to build and push multiple Docker images from the very same source code - it would slow down our CI.
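As a sketch (the process names and the PROCESS_TYPE variable are our convention, not a Kubernetes feature), the entry point of such an image could look like this:

```javascript
// Hypothetical entry point: one image, several process types.
// PROCESS_TYPE selects which process to start (like a Procfile would).
function createProcess (type) {
  switch (type) {
    case 'server':
      return () => console.log('starting web server')
    case 'worker':
      return () => console.log('starting worker')
    case 'clock':
      return () => console.log('starting clock (scheduler)')
    default:
      throw new Error(`Unknown process type: ${type}`)
  }
}

// Defaulting to the web server when the variable is not set
const start = createProcess(process.env.PROCESS_TYPE || 'server')
start()
```

In the deployment config, the server, worker, and clock deployments then share the same image and differ only in the PROCESS_TYPE environment variable.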

Environments, rollback, and service-discovery

Staging, production

During our PaaS period, we named our services like trace-foo and trace-foo-staging; the only difference between the staging and production application was the name prefix and the different environment variables. In Kubernetes, it's possible to define namespaces. Each namespace is completely independent of the others and doesn't share any resources like secrets, configs, etc.

$ kubectl create namespace production
$ kubectl create namespace staging

Application versions

In a containerized infrastructure, each application version should be a different container image with a tag. We use the Git short hash as a Docker image tag.

foo:b37d759  
foo:f53a7cb  

To deploy a new version of your application, you only need to change the image tag in your application's deployment config, Kubernetes will do the rest.

Kubernetes Tutorial: The Deploy Flow

Any change in your deployment file is versioned, and you can roll back to any revision at any time.

$ kubectl rollout history deployment/foo
deployments "foo":  
REVISION    CHANGE-CAUSE  
1           kubectl set image deployment/foo foo=foo:b37d759  
2           kubectl set image deployment/foo foo=foo:f53a7cb  

During our deploy process, we only replace the Docker image, which is quite fast - it only requires a couple of seconds.

Service discovery

Kubernetes has a built-in simple service discovery solution: The created services expose their hostname and port as an environment variable for each pod.

const fooServiceUrl = `http://${process.env.FOO_SERVICE_HOST}:${process.env.FOO_SERVICE_PORT}`  

If you don't need advanced discovery, you can just start using it, instead of copying your service URLs to each other's environment variables. Kind of cool, isn't it?
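A tiny helper along these lines makes this reusable (the helper itself is hypothetical, but the FOO_SERVICE_HOST/FOO_SERVICE_PORT variables are the ones Kubernetes injects for a service named foo):

```javascript
// Build a service URL from the env vars Kubernetes injects for a service.
// For a service named "foo", Kubernetes sets FOO_SERVICE_HOST and
// FOO_SERVICE_PORT in every pod (dashes become underscores).
function serviceUrl (name, env = process.env) {
  const prefix = name.toUpperCase().replace(/-/g, '_')
  const host = env[`${prefix}_SERVICE_HOST`]
  const port = env[`${prefix}_SERVICE_PORT`]
  if (!host || !port) {
    throw new Error(`Service "${name}" is not discoverable`)
  }
  return `http://${host}:${port}`
}
```

For example, serviceUrl('foo') resolves the same URL as the snippet above.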

Production ready application

The really challenging part of jumping into a new technology is knowing what you need to be production ready. In the following sections, we will go through what you should consider setting up in your app.

Zero downtime deployment and failover

Kubernetes can update your application in a way that always keeps some pods running, deploying your changes in smaller steps - instead of stopping and starting all of them at the same time.

This is not only helpful for achieving zero-downtime deploys; it also avoids killing your whole application when you misconfigure something. Your mistake stops escalating to all of the running pods once Kubernetes detects that your new pods are unhealthy.

Kubernetes supports several strategies to deploy your applications. You can check them in the Deployment strategy documentation.
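As an illustration, a rolling update can be constrained like this in the deployment spec (the numbers are only an example):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1  # at most one pod below the desired count during a deploy
    maxSurge: 1        # at most one extra pod above the desired count
```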

Graceful stop

This is not strictly related to Kubernetes, but it's impossible to have a good application lifecycle without starting and stopping your process properly.

Start server

const server = MyServer()  
Promise.all([  
   db1.connect(),
   db2.connect()
])
  .then(() => server.listen(3000))

Graceful server stop

process.on('SIGTERM', () => {  
  server.close()
    .then(() => Promise.all([
      db1.disconnect(),
      db2.disconnect()
    ]))
    .then(() => process.exit(0))
    .catch((err) => process.exit(-1))
})

Liveness probe (health check)

In Kubernetes, you should define a health check (liveness probe) for your application. With this, Kubernetes will be able to detect when your application needs to be restarted.

Web server health check

You have multiple options to check your application's health, but I think the easiest one is to create a GET /healthz endpoint and check your application's logic / DB connections there. It's important to mention that every application is different - only you know which checks are necessary to make sure it's working.

app.get('/healthz', function (req, res, next) {  
  // check my health
  // -> return next(new Error('DB is unreachable'))
  res.sendStatus(200)
})

The matching liveness probe configuration:

livenessProbe:  
    httpGet:
      # Path to probe; should be cheap, but representative of typical behavior
      path: /healthz
      port: 3000
    initialDelaySeconds: 30
    timeoutSeconds: 1
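Building on the idea above, a /healthz handler that actually exercises a dependency might look like the sketch below (checkDb is a hypothetical stand-in for your own connectivity check):

```javascript
// Sketch of a health check handler that exercises a real dependency.
// checkDb is a hypothetical function returning a Promise that resolves
// when the database is reachable and rejects otherwise.
function healthz (checkDb) {
  return function (req, res) {
    checkDb()
      .then(() => res.sendStatus(200))   // healthy: Kubernetes leaves the pod alone
      .catch(() => res.sendStatus(503))  // unhealthy: the liveness probe fails
  }
}

// Wiring it up in Express would look like:
// app.get('/healthz', healthz(myDbCheck))
```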

Worker health check

For our workers, we also set up a very small HTTP server with the same /healthz endpoint, which checks different criteria with the same liveness probe. We do this to have company-wide consistent health check endpoints.

Readiness probe

The readiness probe is similar to the liveness probe (health check), but it makes sense only for web servers. It tells the Kubernetes service (~load balancer) that the traffic can be redirected to the specific pod.

It is essential to avoid any service disruption during deploys and other issues.

readinessProbe:  
    httpGet:
      # You can use /healthz or another endpoint
      path: /healthz
      port: 3000
    initialDelaySeconds: 30
    timeoutSeconds: 1

Logging

For logging, you can choose from different approaches, like adding sidecar containers to your application which collect your logs and send them to custom logging solutions, or you can go with the built-in Google Cloud one. We selected the built-in one.

To be able to parse the built-in log levels (severity) on Google Cloud, you need to log in a specific format. You can achieve this easily with the winston-gke module.

// setup logger
const logger = require('winston')  
const winstonGke = require('winston-gke')  
logger.remove(logger.transports.Console)  
winstonGke(logger, config.logger.level)

// usage
logger.info('I\'m a potato', { foo: 'bar' })  
logger.warning('So warning')  
logger.error('Such error')  
logger.debug('My debug log')

If you log in the specific format, Kubernetes will automatically merge your log messages with the container, deployment, etc. meta information and Google Cloud will show it in the right format.

Your application's first log message has to be in the right format; otherwise, Google Cloud won't start parsing your logs correctly.

To achieve this, we made npm start silent (npm start -s) in the Dockerfile: CMD ["npm", "start", "-s"]

Monitoring

We monitor our applications with Trace, which is optimized from scratch to monitor and visualize microservice architectures. The service map view of Trace helped us a lot during the migration to understand which application communicates with which one, and what the database and external dependencies are.

Kubernetes Tutorial: Services in our infrastructure

Since Trace is environment independent, we didn't have to change anything in our codebase, and we could use it to validate the migration and our expectations about the positive performance changes.

Kubernetes Tutorial: Stable and fast response times after the migration

Example

Check out our all together example repository for Node.js with Kubernetes and CircleCI:
https://github.com/RisingStack/kubernetes-nodejs-example

Tooling

Continuous deployment with CI

It's possible to update your Kubernetes deployment with a JSON patch, or to update only the image tag. After you have a working kubectl on your CI machine, you only need to run this command:

$ kubectl --namespace=staging set image deployment/foo foo=foo:GIT_SHORT_SHA

Debugging

In Kubernetes, it's possible to run a shell inside any container - it's this easy:

$ kubectl get pod

NAME           READY     STATUS    RESTARTS   AGE  
foo-37kj5   1/1       Running   0          2d

$ kubectl exec foo-37kj5 -i -t -- sh
# whoami       
root  

Another useful thing is to check the pod events with:

$ kubectl describe pod foo-37kj5

You can also get the logs of any pod with:

$ kubectl logs foo-37kj5

Code piping

At our PaaS provider, we liked code piping between staging and production infrastructure. In Kubernetes we missed this, so we built our own solution.

It's a simple npm library which reads the current image tag from staging and sets it on the production deployment config.

Because the Docker image is the very same, only the environment variables change.
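The library itself is internal, but the core of the idea can be sketched as a pure function over deployment objects (field paths follow the Kubernetes deployment schema; reading and applying the configs is left to kubectl):

```javascript
// Sketch of "code piping": copy the image tag currently running on
// staging into the production deployment config, without touching
// anything else. Works on plain deployment objects.
function pipeImage (stagingDeployment, productionDeployment) {
  const image = stagingDeployment.spec.template.spec.containers[0].image
  // deep-copy so the original production config stays untouched
  const result = JSON.parse(JSON.stringify(productionDeployment))
  result.spec.template.spec.containers[0].image = image
  return result
}
```

Applying the returned config to the production namespace promotes exactly the image that was verified on staging.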

SSL termination (https)

Kubernetes services are not exposed as https by default, but you can easily change this. To do so, read how to expose your applications with TLS in Kubernetes.

Conclusion

To summarize our experience with Kubernetes: we are very satisfied with it.


We improved our application's response times in our microservice architecture. We managed to raise security to the next level with the private network (VPC) between apps.

Also, we reduced our costs and improved failover with the built-in rolling update strategy and the liveness and readiness probes.

If you're at the point where you need to think about your infrastructure's future, you should definitely take Kubernetes into consideration!

If you have questions about migrating to Kubernetes from a PaaS, feel free to post them in the comment section.

Node Hero - Node.js Security Tutorial

This article is the 11th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In this Node.js security tutorial, you are going to learn how to defend your applications against the most common attack vectors.

Upcoming and past chapters:

  1. Getting started with Node.js
  2. Using NPM
  3. Understanding async programming
  4. Your first Node.js HTTP server
  5. Node.js database tutorial
  6. Node.js request module tutorial
  7. Node.js project structure tutorial
  8. Node.js authentication using Passport.js
  9. Node.js unit testing tutorial
  10. Debugging Node.js applications
  11. Node.js Security Tutorial [you are reading it now]
  12. How to Deploy Node.js Applications
  13. Monitoring and operating Node.js applications

Node.js Security threats

Nowadays we see serious security breaches almost every week, like in the LinkedIn or MySpace cases. During these attacks, a huge amount of user data was leaked - and corporate reputations were damaged.

Studies also show that security-related bug tickets stay open for an average of 18 months in some industries.

We have to fix this attitude. If you develop software, security is a part of your job.


Start the Node.js Security Tutorial

Let's get started, and secure our Node.js application by proper coding, tooling, and operation!

Secure Coding Style

Rule 1: Don't use eval

Eval can open up your application to code injection attacks. Try not to use it, but if you have to, never inject unvalidated user input into eval.

Eval is not the only construct you should avoid - in the background, each of the following expressions also evaluates a string as code:

  • setInterval(String, 2)
  • setTimeout(String, 2)
  • new Function(String)
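If you are tempted to eval user input to turn it into data, use a parser instead - JSON.parse only ever produces data, never executes it. The snippet below is a minimal illustration:

```javascript
// Dangerous: eval('(' + userInput + ')') would execute arbitrary code.
// Safe: JSON.parse only ever produces plain data, or throws on bad input.
const userInput = '{"name": "potato"}'
const parsed = JSON.parse(userInput)
console.log(parsed.name)

// Likewise, always pass a function to timers - never a string:
setTimeout(() => console.log('tick'), 2)
```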

Rule 2: Always use strict mode

With 'use strict' you can opt in to a restricted "variant" of JavaScript. It eliminates some silent errors by turning them into thrown exceptions.

'use strict'  
delete Object.prototype  
// TypeError
var obj = {  
    a: 1, 
    a: 2 
} 
// SyntaxError in ES5 strict mode (duplicate keys are allowed again since ES2015)

Rule 3: Handle errors carefully

During different error scenarios, your application may leak sensitive details about the underlying infrastructure, like: X-Powered-By:Express.

Stack traces are not treated as vulnerabilities by themselves, but they often reveal information that can be interesting to an attacker. Providing debugging information as a result of operations that generate errors is considered a bad practice. You should always log them, but never show them to the users.


Rule 4: Do a static analysis of your codebase

Static analysis of your application's codebase can catch a lot of errors. For that we suggest using ESLint with the Standard code style.

Running Your Services in Production Securely

Using proper code style is not enough to efficiently secure Node.js applications - you should also be careful about how you run your services in production.

Rule 5: Don't run your processes with superuser rights

Sadly, we see this a lot: developers are running their Node.js application with superuser rights, as they want it to listen on port 80 or 443.

This is just wrong. In the case of an error or bug, your process can bring down the entire system, as it has the privileges to do anything.

Instead, you can set up an HTTP server/proxy to forward the requests - this can be nginx or Apache. Check out our article on Operating Node.js in Production to learn more.

Rule 6: Set up the obligatory HTTP headers

There are some security-related HTTP headers that your site should set. These headers are:

  • Strict-Transport-Security enforces secure (HTTP over SSL/TLS) connections to the server
  • X-Frame-Options provides clickjacking protection
  • X-XSS-Protection enables the Cross-site scripting (XSS) filter built into most recent web browsers
  • X-Content-Type-Options prevents browsers from MIME-sniffing a response away from the declared content-type
  • Content-Security-Policy prevents a wide range of attacks, including Cross-site scripting and other cross-site injections

In Node.js it is easy to set these using the Helmet module:

var express = require('express')  
var helmet = require('helmet')

var app = express()

app.use(helmet())  

Helmet is available for Koa as well: koa-helmet.

Rule 7: Do proper session management

The following list of flags should be set for each cookie:

  • secure - this attribute tells the browser to only send the cookie if the request is being sent over HTTPS.
  • HttpOnly - this attribute is used to help prevent attacks such as cross-site scripting since it does not allow the cookie to be accessed via JavaScript.
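To make the effect of these flags concrete, here is what they look like on the wire in the Set-Cookie header (the serializer below is a hypothetical illustration - in practice, the cookies package or cookie-session sets the flags for you):

```javascript
// Hypothetical illustration: serialize a cookie with security flags,
// the way it appears in a Set-Cookie response header.
function serializeCookie (name, value, flags) {
  const parts = [`${name}=${encodeURIComponent(value)}`]
  if (flags.secure) parts.push('Secure')      // only sent over HTTPS
  if (flags.httpOnly) parts.push('HttpOnly')  // not readable from JavaScript
  return parts.join('; ')
}

// serializeCookie('session', 'abc123', { secure: true, httpOnly: true })
```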

Rule 8: Set cookie scope

  • domain - this attribute is used to compare against the domain of the server in which the URL is being requested. If the domain matches or if it is a sub-domain, then the path attribute will be checked next.
  • path - in addition to the domain, the URL path that the cookie is valid for can be specified. If the domain and path match, then the cookie will be sent in the request.
  • expires - this attribute is used to set persistent cookies since the cookie does not expire until the set date is exceeded.

In Node.js you can easily create these cookies using the cookies package. Again, this is quite low-level, so you will probably end up using a wrapper, like cookie-session.

var cookieSession = require('cookie-session')  
var express = require('express')

var app = express()

app.use(cookieSession({  
  name: 'session',
  keys: [
    process.env.COOKIE_KEY1,
    process.env.COOKIE_KEY2
  ]
}))

app.use(function (req, res, next) {  
  var n = req.session.views || 0
  req.session.views = n + 1
  res.end(n + ' views')
})

app.listen(3000)  

(The example is taken from the cookie-session module documentation.)

Tools to Use

Congrats, you’re almost there! If you followed this tutorial and did the previous steps thoroughly, you have just one area left to cover regarding Node.js security. Let’s dive into using the proper tools to look for module vulnerabilities!


Rule 9: Look for vulnerabilities with Retire.js

The goal of Retire.js is to help you detect the use of module versions with known vulnerabilities.

Simply install with:

npm install -g retire  

After that, running the retire command will look for vulnerabilities in your node_modules directory. (Note that Retire.js works not only with Node modules, but with front-end libraries as well.)

Rule 10: Audit your modules with the Node Security Platform CLI

nsp is the main command line interface to the Node Security Platform. It allows for auditing a package.json or npm-shrinkwrap.json file against the NSP API to check for vulnerable modules.

npm install nsp --global  
# From inside your project directory
nsp check  

Next up

Node.js security is not such a big deal after all, is it? I hope you found these rules helpful for securing your Node.js applications - and that you will follow them in the future, since security is a part of your job!

If you’d like to read more on Node.js security, I can recommend these articles to start with:

In the next chapter of Node Hero, you are going to learn how to deploy your secured Node.js application, so people can actually start using it!

If you have any questions or recommendations for this topic, write them in the comments section.

(twitter card photo credit: www.perspecsys.com)

Writing a JavaScript Framework - Project Structuring

In the last couple of months Bertalan Miklos, JavaScript engineer at RisingStack wrote a next generation client-side framework, called NX. In the Writing a JavaScript Framework series, Bertalan shares what he learned during the process:

In this chapter, I am going to explain how NX is structured, and how I solved its use case specific difficulties regarding extendibility, dependency injection and private variables.

The series includes the following chapters.

  1. Project structuring (current chapter)
  2. Execution timing
  3. Sandboxed code evaluation
  4. Data binding (part 1)
  5. Data binding (part 2)
  6. Custom elements
  7. Client side routing

Project Structuring

There is no structure that fits all projects, although there are some general guidelines. Those who are interested can check out our Node.js project structure tutorial from the Node Hero series.

An overview of the NX JavaScript Framework

NX aims to be an open-source community driven project, which is easy to extend and scales well.

  • It has all the features expected from a modern client-side framework.
  • It has no external dependencies, other than polyfills.
  • It consists of around 3000 lines altogether.
  • No module is longer than 300 lines.
  • No feature module has more than 3 dependencies.

Its final dependency graph looks like this:

JavaScript Framework in 2016: The NX project structure

This structure provides a solution for some typical framework related difficulties.

  • Extendibility
  • Dependency injection
  • Private variables

Achieving Extendibility

Easy extendibility is a must for community driven projects. To achieve it, the project should have a small core and a predefined dependency handling system. The former ensures that it is understandable, while the latter ensures that it will stay that way.

In this section, I focus on having a small core.

The main feature expected from modern frameworks is the ability to create custom components and use them in the DOM. NX has the single component function as its core, and that does exactly this. It allows the user to configure and register a new component type.

component(config)  
  .register('comp-name')

The registered comp-name is a blank component type which can be instantiated inside the DOM as expected.

<comp-name></comp-name>  

The next step is to ensure that the components can be extended with new features. To keep both simplicity and extendibility, these new features should not pollute the core. This is where dependency injection comes in handy.

Dependency Injection (DI) with Middlewares

If you are unfamiliar with dependency injection, I suggest you read our article on the topic: Dependency Injection in Node.js.

Dependency injection is a design pattern in which one or more dependencies (or services) are injected, or passed by reference, into a dependent object.

DI removes hard-wired dependencies but introduces a new problem: the user has to know how to configure and inject all the dependencies. Most client-side frameworks have DI containers doing this instead of the user.

A Dependency Injection Container is an object that knows how to instantiate and configure objects.

Another approach is the middleware DI pattern, which is widely used on the server side (Express, Koa). The trick here is that all injectable dependencies (middlewares) have the same interface and can be injected the same way. In this case, no DI container is needed.

I went with this solution to keep things simple. If you have ever used Express, the code below will be very familiar.

component()  
  .use(paint) // inject paint middleware
  .use(resize) // inject resize middleware
  .register('comp-name')

function paint (elem, state, next) {  
  // elem is the component instance, set it up or extend it here
  elem.style.color = 'red'
  // then call next to run the next middleware (resize)
  next()
}

function resize (elem, state, next) {  
  elem.style.width = '100px'
  next()
}
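To make the control flow concrete, a middleware runner like the one above could be sketched as follows (this is an illustration only - the real NX internals differ):

```javascript
// Minimal sketch of a middleware chain: .use() collects middlewares,
// .run() executes them in order; each middleware decides when to hand
// over control by calling next().
function component () {
  const middlewares = []
  return {
    use (middleware) {
      middlewares.push(middleware)
      return this // chainable, like component().use(paint).use(resize)
    },
    run (elem, state) {
      let index = 0
      function next () {
        const middleware = middlewares[index++]
        if (middleware) middleware(elem, state, next)
      }
      next()
      return elem
    }
  }
}
```

Calling component().use(paint).use(resize).run(elem, state) invokes the middlewares in registration order, each one extending the same elem object.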

Middlewares execute when a new component instance is attached to the DOM and typically extend the component instance with new features. Extending the same object by different libraries leads to name collisions. Exposing private variables deepens this problem and may cause accidental usage by others.

Having a small public API and hiding the rest is a good practice to avoid these.

Handling privacy

Privacy is handled by function scope in JavaScript. When cross-scope private variables are required, people tend to prefix them with _ to signal their private nature and expose them publicly. This prevents accidental usage but doesn't avoid name collisions. A better alternative is the ES6 Symbol primitive.

A symbol is a unique and immutable data type that may be used as an identifier for object properties.

The below code demonstrates a symbol in action.

const color = Symbol()

// a middleware
function colorize (elem, state, next) {  
  elem[color] = 'red'
  next()
}

Now 'red' is only reachable by owning a reference to the color symbol (and the element). The privacy of 'red' can be controlled by exposing the color symbol to different extents. With a reasonable number of private variables, having a central symbol storage is an elegant solution.

// symbols module
exports.private = {  
  color: Symbol('color from colorize')
}
exports.public = {}  

And an index.js like below.

// main module
const symbols = require('./symbols')  
exports.symbols = symbols.public  

The storage is accessible inside the project for all modules, but the private part is not exposed to the outside. The public part can be used to expose low-level features to external developers. This prevents accidental usage, since a developer has to explicitly require the needed symbol to use it. Moreover, symbol references cannot collide the way string names can, so name collisions are impossible.
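The uniqueness guarantee is easy to verify: even two symbols created with the same description are distinct keys.

```javascript
// Symbols with identical descriptions are still different keys,
// so middlewares from different libraries can never collide on them
const a = Symbol('color')
const b = Symbol('color')

console.log(a === b) // false — every Symbol is unique

const elem = {}
elem[a] = 'red'
elem[b] = 'blue'
console.log(elem[a], elem[b]) // red blue — both values coexist
```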

The points below summarize the pattern for different scenarios.

1. Public variables

Use them normally.

function setText (elem, state, next) {  
  elem.publicText = 'Hello World!'
  next()
}

2. Private variables

Cross-scope variables that are private to the project should have a symbol key added to the private symbol registry.

// symbols module
exports.private = {  
  text: Symbol('private text')
}
exports.public = {}  

And required from it when needed somewhere.

const private = require('./symbols').private

function setPrivateText (elem, state, next) {  
  elem[private.text] = 'Hello World!'
  next()
}

3. Semi-private variables

Variables of the low-level API should have a symbol key added to the public symbol registry.

// symbols module
exports.private = {  
  text: Symbol('private text')
}
exports.public = {  
  text: Symbol('exposed text')
}

And required from it when needed somewhere.

const exposed = require('./symbols').public

function setExposedText (elem, state, next) {  
  elem[exposed.text] = 'Hello World!'
  next()
}
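A self-contained sketch of the whole pattern, with the registry inlined into one file for demonstration (the names are illustrative):

```javascript
// Central symbol registry (normally a separate symbols module)
const symbols = {
  private: { text: Symbol('private text') },
  public: { text: Symbol('exposed text') }
}

const elem = {}
elem[symbols.private.text] = 'internal state'
elem[symbols.public.text] = 'Hello World!'

// An external consumer is handed only the public part of the registry
const exposed = symbols.public
console.log(elem[exposed.text]) // Hello World!

// Symbol keys are invisible to ordinary property enumeration,
// so the private value can't be stumbled upon by accident
console.log(Object.keys(elem).length) // 0
```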

Conclusion

If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this GitHub repository.

I hope you found this a good read, see you next time when I’ll discuss execution timing!

If you have any thoughts on the topic, share it in the comments.

Node Hero - Debugging Node.js Applications

This article is the 10th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In this tutorial, you are going to learn how to debug your Node.js applications using the debug module, the built-in Node.js debugger, and Chrome's Developer Tools.

Upcoming and past chapters:

  1. Getting started with Node.js
  2. Using NPM
  3. Understanding async programming
  4. Your first Node.js HTTP server
  5. Node.js database tutorial
  6. Node.js request module tutorial
  7. Node.js project structure tutorial
  8. Node.js authentication using Passport.js
  9. Node.js unit testing tutorial
  10. Debugging Node.js applications [you are reading it now]
  11. Node.js Security Tutorial
  12. How to Deploy Node.js Applications
  13. Monitoring and operating Node.js applications

Bugs, debugging

The terms bug and debugging have been part of engineering jargon for many decades. One of the first written mentions of bugs reads as follows:

It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise — this thing gives out and [it is] then that "Bugs" — as such little faults and difficulties are called — show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached.

Thomas Edison


Debugging Node.js Applications

One of the most frequently used approaches to finding issues in Node.js applications is the heavy use of console.log for debugging.

"Console.log is efficient for debugging small snippets, but we recommend better alternatives!" via @RisingStack #nodejs


Let's take a look at them!

The debug module

Many of the most popular modules that you can require into your project ship with the debug module. With this module, you can enable third-party modules to log to the standard output (stdout). To check whether a module uses it, take a look at its package.json file's dependency section.

To use the debug module, you have to set the DEBUG environment variable when starting your applications. You can also use the * character to wildcard names. The following line will print all the express related logs to the standard output.

DEBUG=express* node app.js  

The output will look like this:

Logfile made with the Node debug module
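Under the hood, debug simply matches each logger's namespace against the DEBUG pattern. A rough, illustrative re-implementation of that filtering (not the real module's code, which also supports colors, comma-separated patterns, and exclusions) might look like this:

```javascript
// Minimal sketch of debug-style namespace filtering
function createDebug (namespace) {
  // translate wildcards like "express*" into a regular expression
  const pattern = (process.env.DEBUG || '').replace(/\*/g, '.*')
  const enabled = pattern !== '' &&
    new RegExp('^' + pattern + '$').test(namespace)
  return function (msg) {
    if (enabled) console.log(namespace + ' ' + msg)
  }
}

process.env.DEBUG = 'express*'
const routerLog = createDebug('express:router')
const dbLog = createDebug('mongoose:query')

routerLog('registered route /users') // printed — matches express*
dbLog('find()')                      // suppressed — namespace does not match
```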

The Built-in Node.js Debugger

Node.js includes a full-featured out-of-process debugging utility accessible via a simple TCP-based protocol and built-in debugging client.

To start the built-in debugger you have to start your application this way:

node debug app.js  

Once you have done that, you will see something like this:

using the built-in node.js debugger

Basic Usage of the Node Debugger

To navigate this interface, you can use the following commands:

  • c => continue with code execution
  • n => execute this line and go to next line
  • s => step into this function
  • o => finish function execution and step out
  • repl => open the debugger's REPL to evaluate code in the debugged script's context

You can add breakpoints to your applications by inserting the debugger statement into your codebase.

function add (a, b) {  
  debugger
  return a + b
}

var res = add('apple', 4)  


Watchers

It is possible to watch expression and variable values during debugging. On every breakpoint, each expression from the watchers list will be evaluated in the current context and displayed immediately before the breakpoint's source code listing.

To start using watchers, you have to define them for the expressions you want to watch, like this:

watch('expression')  

To get a list of active watchers, type watchers; to stop watching an expression, use unwatch('expression').

Pro tip: you can switch running Node.js processes into debug mode by sending them the SIGUSR1 signal. After that, you can connect the debugger with node debug -p <pid>.

"You can switch running #nodejs processes into debug mode by sending them the SIGUSR1 signal." via @RisingStack


To understand the full capabilities of the built-in debugger, check out the official API docs: https://nodejs.org/api/debugger.html.

The Chrome Debugger

When you start debugging complex applications, something visual can help. Wouldn't it be great to use the familiar UI of the Chrome DevTools for debugging Node.js applications as well?

node.js chrome developer tools debugging

Good news: the Chrome debug protocol has already been ported into a Node.js module and can be used to debug Node.js applications.

To start using it, you have to install node-inspector first:

npm install -g node-inspector  

Once you have installed it, you can start debugging your applications by launching them this way:

node-debug index.js --debug-brk  

(the --debug-brk flag pauses execution on the first line)

It will open up the Chrome Developer Tools, and you can start debugging your Node.js applications with it.

Next up

Debugging is not that hard after all, is it?

In the next chapter of Node Hero, you are going to learn how to secure your Node.js applications.

If you have any questions or recommendations for this topic, write them in the comments section.