Gergely Nemeth

Node.js and microservices, organizer of @Oneshotbudapest @nodebp @jsconfbp

Upcoming Node.js, Microservices & Security Trainings in Europe

We are happy to announce that starting from September, three instructors from RisingStack will be traveling Europe to hold trainings on Node.js, Microservices, and Security.

The Node.js Trainings:

1. Node.js Fundamentals

During this training, you will learn how to set up a proper Node.js development environment, how to build applications using Express and PostgreSQL, how to manage asynchronous operations, and how to test them. The training takes two days.

You can find the detailed training agenda here.

This training will be available in Vienna (September 21-22), Barcelona (October 5-6), Berlin (October 12-13), London (October 19-20), Dublin (October 26-27), Zurich (November 2-3), Paris (November 9-10) & Amsterdam (November 9-10).

2. Securing Node.js Applications

During this training, you will learn how to make your Node.js application more secure, and how to protect it against XSS, CSRF, SQL Injection, or various DDoS attacks. The training takes two days.

You can find the detailed agenda here.

This training will be available in Vienna (September 28-29), Dublin (October 5-6), Amsterdam (October 12-13), Paris (October 19-20), Barcelona (October 26-27), Berlin (November 9-10), Zurich (November 16-17), London (November 23-24), Lisbon (November 30 - December 1)

3. Building Microservices with Node.js

In the Building Microservices with Node.js training, you will learn what Microservices are, and how you can implement them using Node.js. The training takes two days.

You can find the detailed agenda here.

This training will be available in Vienna (September 28-29), Dublin (October 5-6), Amsterdam (October 12-13), Paris (October 19-20), Barcelona (October 26-27), Berlin (November 9-10), Zurich (November 16-17), London (November 23-24), Lisbon (November 30 - December 1)

About the instructors

Peter Czibik

Peter Czibik joined RisingStack as one of the first employees. He has been helping companies adopt Node.js by holding training and consulting sessions for them. Peter will hold most of the Node.js Fundamentals trainings.

You can find Peter on Twitter under @peteyycz.

Tamas Kadlecsik

Tamas Kadlecsik has engineered some of the most complex parts of our Microservice architecture, gaining a lot of experience in designing such systems. Tamas will hold most of the Building Microservices with Node.js trainings.

You can find Tamas on Twitter under @bionazgul.

Gergely Nemeth

Gergely Nemeth is one of the founders of RisingStack and a long-time Node.js user. He is very keen on security, so he will hold most of the Securing Web Applications classes.

You can find Gergely on Twitter under @nthgergo.

Training Locations


Vienna Node.js Training

Reserve your spot in Vienna:


Dublin Node.js Training

Reserve your spot in Dublin:


Barcelona Node.js Training

Reserve your spot in Barcelona:


Berlin Node.js Training

Reserve your spot in Berlin:


Amsterdam Node.js Training

Reserve your spot in Amsterdam:


Zurich Node.js Training

Reserve your spot in Zurich:


London Node.js Training

Reserve your spot in London:


Paris Node.js Training

Reserve your spot in Paris:


Lisbon Node.js Training

Reserve your spot in Lisbon:

+1: Training Ambassador Program

With the trainings, we are opening up our Ambassador program as well. Here's how it works:

  1. After applying to the program, you will receive a discount code.
  2. You can send the discount code to your developer friends.
  3. As soon as 3 people register with it, you get a free ticket.

To get more information and your discount code, please write an email to [email protected]

Convince your boss to send you to one of our trainings

We know that convincing your supervisors to send you to trainings or conferences can sometimes be challenging. This is why we drafted emails which you can send to your boss immediately.

Just fill out the following form, and we will generate the text for you:

Disclaimer: We won't spam you.

See You at Our Node.js Trainings!

We hope to see you at one of the locations where we offer trainings!

Please let us know if your city is missing from the list, and let's talk about whether we can make it work!

Building a Node.js App with TypeScript Tutorial

This tutorial teaches you how to build, structure, test, and debug a Node.js application written in TypeScript. To do so, we use an example project which you can access anytime later.

Managing large-scale JavaScript projects can be challenging, as you need to guarantee that the pieces fit together. You can use unit tests, types (which JavaScript does not really have), or the two in combination to solve this issue.

This is where TypeScript comes into the picture. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript.

In this article you will learn:

  • what TypeScript is,
  • what the benefits of using TypeScript are,
  • how you can set up a project to start developing using it:
    • how to add linters,
    • how to write tests,
    • how to debug applications written in TypeScript.

This article won't go into the details of the TypeScript language itself; it focuses on how you can build Node.js applications using it. If you are looking for an in-depth TypeScript tutorial, I recommend checking out the TypeScript Gitbook.

The benefits of using TypeScript

As we already discussed, TypeScript is a superset of JavaScript. It gives you the following benefits:

  • optional static typing, with emphasis on optional (it makes porting JavaScript applications to TypeScript easy),
  • as a developer, you can start using ECMAScript features that are not supported by the current V8 engine by using build targets,
  • use of interfaces,
  • great tooling with instruments like IntelliSense.

Getting started with TypeScript & Node

TypeScript is a static type checker for JavaScript. This means that it will check for issues in your codebase using the information available on different types. Example: a String will have a toLowerCase() method, but not a parseInt() method. Of course, the type system of TypeScript can be extended with your own type definitions.

As TypeScript is a superset of JavaScript, you can start using it by literally just renaming your .js files to .ts, so you can introduce TypeScript gradually to your teams.

Note: TypeScript does nothing at runtime; it works only at compilation time. You will run pure JavaScript files.

To get started with TypeScript, grab it from npm:

$ npm install -g typescript

Let's write our first TypeScript file! It will simply greet the person it gets as a parameter:

// greeter.ts
function greeter(person: string) {
  return `Hello ${person}!`
}

const userName = 'Node Hero'

console.log(greeter(userName))
One thing you could already notice is the string type annotation which tells the TypeScript compiler that the greeter function is expecting a string as its parameter.

Let's try to compile it!

tsc greeter.ts  

First, let's take a look at the compiled output! As you can see, there was no major change: only the type annotations were removed:

function greeter(person) {
    return "Hello " + person + "!";
}
var userName = 'Node Hero';
console.log(greeter(userName));

What would happen if you changed userName to a Number? As you can guess, you would get a compilation error:

greeter.ts(10,21): error TS2345: Argument of type '3' is not assignable to parameter of type 'string'.  

Tutorial: Building a Node.js app with TypeScript

1. Set up your development environment

To build applications using TypeScript, make sure you have Node.js installed on your system. This article will use Node.js 8.

We recommend installing Node.js using nvm, the Node.js version manager. With this utility application, you can have multiple Node.js versions installed on your system, and switching between them is only a command away.

# install nvm
curl -o- | bash

# install node 8
nvm install 8

# to make node 8 the default
nvm alias default 8  

Once you have Node.js 8 installed, you should create a directory where your project will live. After that, create your package.json file using:

npm init  

2. Create the project structure

When using TypeScript, it is recommended to put all your source files under a src folder.

At the end of this tutorial, we will end up with the following project structure:

Node.js TypeScript Tutorial - Example Application Project Structure

Let's start by adding the App.ts file - this will be the file where your web server logic will be implemented, using express.

In this file, we are creating a class called App, which will encapsulate our web server. It has a private method called mountRoutes, which mounts the routes served by the server. The express instance is reachable through the public express property.

import * as express from 'express'

class App {
  public express

  constructor () {
    this.express = express()
    this.mountRoutes()
  }

  private mountRoutes (): void {
    const router = express.Router()
    router.get('/', (req, res) => {
      res.json({
        message: 'Hello World!'
      })
    })
    this.express.use('/', router)
  }
}

export default new App().express

We are also creating an index.ts file, so the web server can be fired up:

import app from './App'

const port = process.env.PORT || 3000

app.listen(port, (err) => {
  if (err) {
    return console.log(err)
  }

  return console.log(`server is listening on ${port}`)
})

With this - at least in theory - we have a functioning server. To actually make it work, we have to compile our TypeScript code to JavaScript.

For more information on how to structure your project, read our Node.js project structuring article.

3. Configuring TypeScript

You can pass options to the TypeScript compiler either by using the CLI or a special file called tsconfig.json. As we would like to use the same settings for different tasks, we will go with the tsconfig.json file.

By using this configuration file, we are telling TypeScript things like the build target (it can be ES5, ES6, or ES7 at the time of this writing), what module system to expect, where to put the built JavaScript files, and whether it should create source-maps as well.

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "outDir": "dist",
    "sourceMap": true
  },
  "files": [
    "./node_modules/@types/mocha/index.d.ts",
    "./node_modules/@types/node/index.d.ts"
  ],
  "include": [
    "src/**/*.ts"
  ],
  "exclude": [
    "node_modules"
  ]
}

Once you've added this TypeScript configuration file, you can build your application using the tsc command.

If you do not want to install TypeScript globally, just add it to the dependencies of your project and create an npm script for it: "tsc": "tsc".

This works because npm scripts look for the binary in the ./node_modules/.bin folder and add it to the PATH when running scripts. You can then access tsc using npm run tsc, and pass options to it with this syntax: npm run tsc -- --all (this will list all the available options for TypeScript).
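Tying these together, the relevant part of a package.json might look like the following sketch. The build and test scripts are assumptions based on the workflow described in this article, and the version ranges are illustrative:

```json
{
  "scripts": {
    "tsc": "tsc",
    "build": "tsc",
    "test": "tsc && mocha dist/**/*.spec.js"
  },
  "devDependencies": {
    "typescript": "^2.0.0",
    "mocha": "^3.0.0"
  }
}
```

With this in place, npm run build compiles the project into the output folder defined in tsconfig.json.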

Need help with enterprise-grade Node.js Development?
Hire the experts of RisingStack!

4. Add ESLint

As with most projects, you want to have linters to check for style issues in your code. TypeScript is no exception.

To use ESLint with TypeScript, you have to add an extra package, a parser, so ESLint can understand TypeScript as well: typescript-eslint-parser. Once you've installed it, you have to set it as the parser for ESLint:

# .eslintrc.yaml
extends: airbnb-base
env:
  node: true
  mocha: true
  es6: true
parser: typescript-eslint-parser
parserOptions:
  sourceType: module
  ecmaFeatures:
    modules: true

Once you run eslint src --ext ts, you will get the same errors and warnings for your TypeScript files that you are used to:

Node.js TypeScript Tutorial - Console Errors

5. Testing your application

Testing your TypeScript-based applications is essentially the same as testing any other Node.js application.

The only gotcha is that you have to compile your application before actually running tests on it. Achieving this is very straightforward: simply run tsc && mocha dist/**/*.spec.js.

For more on testing, check out our Node.js testing tutorial.

6. Build a Docker image

Once you have your application ready, most probably you want to deploy it as a Docker image. The only extra steps you need to take are:

  • build the application (compile from TypeScript to JavaScript),
  • start the Node.js application from the built source.
FROM risingstack/alpine:3.4-v6.9.4-4.2.0

COPY package.json package.json
RUN npm install

COPY . .
RUN npm run build

CMD ["node", "dist/"]

7. Debug using source-maps

As we enabled generating source-maps, we can use them to find bugs in our application. To start looking for issues, start your Node.js process the following way:

node --inspect dist/  

This will output something like the following:

To start debugging, open the following URL in Chrome:  
    chrome-devtools:[email protected]84980/inspector.html?experiments=true&v8only=true&ws=
server is listening on 3000  

To actually start the debugging process, open up your Google Chrome browser and browse to chrome://inspect. A remote target should already be there, just click inspect. This will bring up the Chrome DevTools.

Here, you will instantly see the original source, and you can start putting breakpoints and watchers on the TypeScript source code.

Node.js TypeScript Tutorial - Debugging in Chrome

The source-map support only works with Node.js 8 and higher.

The Complete Node.js TypeScript Tutorial

You can find the complete Node.js TypeScript starter application on GitHub.

Let us know in the issues, or here in the comments, what you would change!

Survey: Node.js Developers Struggle with Debugging & Downtimes

In this article, we summarize the insights we learned from our latest survey on developers' problems with Node.js debugging, downtimes, microservices & other pain-points.

In the second part of the article, we attempt to provide help with the greatest Node.js issues we discovered.

A survey conducted by RisingStack, the company known for its Node.js Monitoring Solution and Node.js Services, shows that one-third of Node.js developers experience downtimes in production systems at least once a week.

Node.js Survey by RisingStack - Downtimes

The survey also shows that 43% of the developers spend more than 2 hours a week debugging their Node.js applications, including the 17% who spend more than 5 hours a week fixing errors in production systems.

Node.js Survey by RisingStack - Time spent with Debugging

About the Node.js Survey

The survey was conducted via an online form between April 13 and 27. It was completed by 746 developers, of which 509 claim to have Node.js applications in production systems. The survey focuses solely on their problems, and this report is based on their answers.

The participants of the survey are subscribers of RisingStack’s Node.js Blog, who were asked to participate through email.

The Greatest Pain-Points of Node.js Development

The survey provides valuable insights into the problems Node developers face on a daily basis since we asked them to describe what’s the greatest challenge they face with their applications.

The biggest hurdle that developers who use Node.js face is debugging their applications (18% of all respondents). According to the data, nothing comes close to the seriousness of debugging microservices and bugs in production.

Node.js Survey by RisingStack - Top Node.js Problems Developers Face


The second tier of errors contains categories like memory issues (mainly memory leaks), the difficulties of scaling & monitoring Node.js apps, and getting deployments right (especially in high-scale production systems). There are also problems with async programming, error handling, and managing dependencies. Each of these problems was pinpointed as the most severe by 6-8% of developers.

The third tier of problems is still significant. Developers frequently mention microservices (communication, integration, management), callbacks, security (including authentication issues), and downtimes (handling downtimes and restarts). They also have trouble keeping up with the fast-changing Node.js environment, improving the performance of their apps, and tracing issues.

The last tier of problems named by the respondents of this survey was relatively minor. It includes the usage of promises, the lack of standards, and the hardships of project structuring, testing & threading. Developers also mention the blocked event loop, learning Node.js, logging, and configuration as their biggest challenges with Node.

How do Microservices Affect Node.js Systems?

Another interesting insight can be gained from the survey if we compare people who have a microservices-based application to those who said they don’t:

  • There’s a slight increase in hours spent with debugging among those developers who are building a microservices architecture with Node compared to those who work on monoliths. 44,4% spends more than two hours with debugging per week, compared to 41,2%.

  • There’s a significant difference in avoiding downtimes if we compare developers with Microservices architectures to the ones we don’t. It looks like that 29,3% of Microservices developers never experience downtimes, compared to 25,6% of developers who do not build a distributed application.

Node.js Survey by RisingStack - Microservices vs no Microservices

Understanding the Data

Although the data we collected is not representative, it underpins the experience we gained during our consulting & training projects at RisingStack.

Building a microservices architecture in an unprepared company can bring enormous complexity on an engineering level and on the management level as well. We see that there are several costly mistakes which developers constantly make, which can easily lead to chaos.

We recommend breaking down monolithic applications into microservices in a lot of cases, but it's undeniable that finding and fixing bugs in a distributed system is much harder, thanks to the myriad of services throwing errors all the time.

Experiencing fewer downtimes with microservices is one of the main benefits of the technology since the parts of an app can fail individually (and often unnoticed), keeping larger systems intact.

How to Overcome the Greatest Obstacles with Node.js

If the data we collected resonates with your Node.js development experience, you might need help to solve your issues.

Below we collected some of the best resources you can use to tackle the most common difficulties with Node.js:

⬢ Node.js Debugging

⬢ Node.js Memory Leaks & Issues

⬢ Scaling up Your Application

⬢ Monitoring Node.js Apps

⬢ Help with Deployment

⬢ Asynchronous Programming in Node.js

⬢ Node.js Error Handling

Professional help with Node.js

In case you need an agile team who can take ownership of your Node.js projects and help you scale your application, check out our Node.js Consulting & Development Services!

We can help you with:

  • Digital Transformation Projects
  • Architectural Consulting
  • Training and Education
  • Outsourced Development
  • Co-development

Technologies we love to use:

  • Node.js
  • Containers & Cloud
  • Microservices
  • Docker
  • Kubernetes
  • AWS and Google Cloud
  • React

Key Findings of the Node.js Survey

  • 29,27% of Node.js developers experience downtimes in production systems at least once a week, 54,02% at least once a month.

  • 27,50% of Node developers responding to the survey never experience downtimes.

  • 42,82% of the respondents spend more than 2 hours a week debugging their Node.js applications, including the 17,09% who spend more than 5 hours.

  • The developers building a microservices architecture with Node spend more time debugging. The advantage of microservices + Node manifests in the form of fewer downtimes.

Announcement: On-premises Node.js Monitoring Tool

Trace, RisingStack's Node.js Monitoring Tool, helps developers and operations teams debug & monitor Node.js infrastructures with ease.

Node.js Monitoring with Trace by RisingStack - Performance Metrics tab

Why would you need on-premises Node.js Monitoring?

Security and privacy should always be a priority for a Node.js developer; still, sometimes it's not enough to follow the Node security best practices.

Different industries have different compliance requirements (like HIPAA or PCI) when handling customer data. It means that information can never leave the Virtual Private Cloud / Datacenter of the given company.

Monitoring & debugging such systems is impossible with a SaaS product, which means that enterprises constantly end up needing a Node.js monitoring solution that they can host themselves.

With that in mind, we are happy to announce that Trace is now also available as an on-premises solution. To learn more, check out our On-premises Node.js Monitoring Page.

With this milestone, Trace can be entirely self-hosted in any AWS, Azure or GCP datacenter.

It can help you with:

  • gaining visibility into Node.js deployments,
  • auto-discovering your infrastructure,
  • avoiding costly downtimes,
  • meeting SLAs expected by business owners.

Node.js Monitoring - Memory Leak & CPU Profiling

What's Next

As always, we’re happy to hear your thoughts on Trace On-premises - feel free to get in touch or reach out to us on Twitter at @risingstack.

The Important Features and Fixes of Node.js Version 8

With the release of Node.js Version 8 (happening at 12 PM PST on May 30), we got the latest LTS (long-term support) variant with a bunch of new features and performance improvements.

In this post, we'll go through the most important features and fixes of the new Node.js 8 release.

The codename of the new release is Carbon. Node 8 will become the active LTS version in October 2017 and will be maintained until December 31, 2019. This also means that Node.js Version 6 will go into maintenance mode in April 2018 and reach end-of-life in April 2019.

You can grab the nightly releases from here:

Introducing the Async Hooks API

The Async Hooks (previously called AsyncWrap) API allows you to get structural tracing information about the life of handle objects.

The API emits events that inform the consumer about the life of all handle objects in Node.js. It tries to solve similar challenges as the continuation-local-storage npm package, just in the core.

If you are using continuation-local-storage, there is already a drop-in replacement that uses async hooks, called cls-hooked - but currently, it is not ready for prime time, so use it with caution!

How The Async Hooks API Works in Node.js Version 8

The createHook function registers functions to be called for different lifetime events of each async operation.

const asyncHooks = require('async_hooks')


These functions will be fired based on the lifecycle events of the handle objects.

Read more on Async Hooks, or check the work-in-progress documentation.

Introducing the N-API

The N-API is an API for building native addons. It is independent of the underlying JavaScript runtime and is maintained as part of Node.js itself. The goal of this project is to keep the Application Binary Interface (ABI) stable across different Node.js versions.

The purpose of N-API is to separate add-ons from changes in the underlying JavaScript engine so that native add-ons can run with different Node.js versions without recompilation.

Read more on the N-API.

Buffer security improvements in Node 8

Before Node.js version 8, Buffers allocated using the new Buffer(Number) constructor did not initialize the memory space with zeros. As a result, new Buffer instances could contain sensitive information, leading to security problems.

While this was an intentional decision to boost the performance of Buffer creation, for most of us it was not the intended use. Because of this, starting with Node.js 8, Buffers allocated using new Buffer(Number) or Buffer(Number) will be automatically filled with zeros.
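If you want to be explicit about allocation, the Buffer.alloc and Buffer.allocUnsafe APIs (available in recent Node.js versions) make the intent clear; a quick sketch:

```javascript
// Buffer.alloc returns zero-filled memory. Buffer.allocUnsafe returns
// uninitialized memory and is faster, but you must overwrite its
// contents before exposing the buffer anywhere.
const safe = Buffer.alloc(10)
const unsafe = Buffer.allocUnsafe(10)
unsafe.fill(0) // explicitly clear it before use

console.log(safe.every(byte => byte === 0))   // true
console.log(unsafe.every(byte => byte === 0)) // true, after fill(0)
```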

Need help with migrating a large-scale application to Node.js version 8?

Ask our experts

Upgrade V8 to 5.8: Preparing for TurboFan & Ignition

With Node.js Version 8, the underlying V8 JavaScript engine gets updated as well.

The biggest change it brings to Node.js users is that it will make possible the introduction of TurboFan and Ignition in V8 5.9. Ignition is V8's interpreter, while TurboFan is the optimizing compiler.

"The combined Ignition and TurboFan pipeline has been in development for almost 3½ years. It represents the culmination of the collective insight that the V8 team has gleaned from measuring real-world JavaScript performance and carefully considering the shortcomings of Full-codegen and Crankshaft. It is a foundation with which we will be able to continue to optimize the entirety of the JavaScript language for years to come." - Daniel Clifford and the V8 team

Currently (well, with V8 versions older than 5.6, so anything below Node.js version 8), this is how the V8 compilation pipeline looks:

Node.js Version 8 - V8 old pipeline Photo credit: Benedikt Meurer

The biggest issue with this pipeline is that new language features must be implemented in different parts of the pipeline, adding a lot of extra development work.

This is how the simplified pipeline looks, without the FullCode Generator and the Crankshaft:

Node.js Version 8 - V8 new pipeline Photo credit: Benedikt Meurer

This new pipeline significantly reduces the technical debt of the V8 team, and enables a lot of improvements which were impossible previously.

Read more on TurboFan and Ignition and the TurboFan Inlining Heuristics.

Upgrade npm to 5.0.0

The new Node.js 8 release also ships with npm 5 - the newest version of the npm CLI.

Highlights of this new npm release:

  • A new, standardized lockfile feature meant for cross-package-manager compatibility (package-lock.json), and a new format and semantics for shrinkwrap,
  • --save is no longer necessary as all installs will be saved by default,
  • node-gyp now supports node-gyp.cmd on Windows,
  • new publishes will now include both sha512 and sha1 checksums.

Other notable changes in Node.js Version 8

Buffer

  • Buffer methods now accept Uint8Array as input

Child Process

  • Argument and kill signal validations have been improved
  • Child Process methods accept Uint8Array as input


Console

  • Error events emitted when using console methods are now suppressed


Domains

  • Native Promise instances are now Domain aware

File System

  • The utility class fs.SyncWriteStream has been deprecated
  • The deprecated string interface has been removed


HTTP

  • Outgoing Cookie headers are concatenated into a single string
  • The httpResponse.writeHeader() method has been deprecated


Streams

  • Stream now supports destroy() and _destroy() APIs


TLS

  • The rejectUnauthorized option now defaults to true


URL

  • The WHATWG URL implementation is now a fully-supported Node.js API

Next Up with Node.js version 8

Node.js version 8 surprises us with a lot of interesting improvements, including the Async Hooks API, which is hard to grasp given the current (but ever-evolving) state of its documentation. We'll start playing with the new release ASAP and get back to you with more detailed explanations of these features soon.

If you have any questions in the meantime, please put them in the comments section below.

Getting Started with AWS Lambda & Node.js

In this article, we will discuss what serverless programming is and how to get started with AWS Lambda as a Node.js developer.

Since the launch of AWS Lambda back in 2014, serverless (or FaaS, Function as a Service) computing has become more and more popular. It lets you concentrate on your application's functionality without having to worry about your infrastructure.

In the past years, most cloud providers have started to offer their own version of serverless: Microsoft launched Azure Functions, while Google launched Cloud Functions. IBM released an open-source serverless platform called OpenWhisk.

Serverless Computing

Serverless is a type of event-driven architecture: functions react to specific types of trigger events. Speaking of AWS, these can be events fired by S3, SNS, or the API Gateway, just to name a few.

Lifecycle events of AWS Lambda functions

When a Lambda function is invoked, it runs in an execution environment. This environment is provided with the resources specified in the function's configuration (like memory size).

AWS Lambda handles the first call to a Lambda function differently from subsequent calls to the same function.

Calling a new Lambda function for the first time

When you deploy your Lambda function (or update an existing one), a new container will be created for it.

Your code will be moved into the container, and the initialization code will run before the first request arrives at the exposed handler function.

The Lambda function can complete in one of the following ways:

  • timeout - the timeout specified by the user has been reached (it defaults to 5 seconds as of now),
  • controlled termination - the callback of the handler function is called,
  • default termination - all callbacks finished execution (even if the callback of the handler function was not called),
  • crashing the process.

Subsequent calls to an existing Lambda function

For subsequent calls, Lambda may decide to create new containers to serve your requests. In this case, the same process happens as described above, including initialization.

However, if you have not changed your Lambda function and only a little time has passed since the last call, Lambda may reuse the container. This way, it saves the initialization time required to spin up a new container and your code inside it.
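Container reuse can be observed with a simple sketch: a module-level variable initialized once survives across warm invocations. The handler below is illustrative and is invoked locally to simulate two calls hitting the same warm container:

```javascript
// Runs once per container (cold start) ...
let invocationCount = 0

// ... while the handler body runs on every invocation.
const handler = (event, context, callback) => {
  invocationCount += 1
  callback(null, { count: invocationCount })
}

// Simulating two invocations served by the same warm container:
let first, second
handler({}, {}, (err, res) => { first = res.count })
handler({}, {}, (err, res) => { second = res.count })
console.log(first, second) // 1 2
```

On AWS, a cold start would reset invocationCount to 0, which is why per-container state should only be used for caches and connections, never for correctness.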


Building your first function

Now that we're done discussing the lifecycle events of Lambda functions, it is time to take a look at some actual Lambda function implementations!

The Anatomy of an AWS Lambda function (in Node.js)

/* Initialization part starts here */
const mysql      = require('mysql')
const connection = mysql.createConnection({
  host     : process.env.MYSQL_HOST,
  user     : process.env.MYSQL_USER,
  password : process.env.MYSQL_PASSWORD,
  database : process.env.MYSQL_DB
})
/* Initialization part ends here */

/* Handler function starts here */
exports.handler = (event, context, callback) => {
  const sql = 'SELECT * FROM users WHERE id = ' + connection.escape(event.userId)
  connection.query(sql, function (error, results, fields) {
    if (error) {
      return callback(error)
    }
    callback(null, results)
  })
}
/* Handler function ends here */

Let's examine what you can see in the example above:

  • initialization - this is the part of the code snippet that will only run once per container creation. This is a good place to create database connections.
  • handler function - this function will be called every time your Lambda function is executed.
    • event - this variable is used by Lambda to pass in event data to the handler (like an HTTP request).
    • context - the context variable is used to pass in runtime information for the Lambda function, like how much time is remaining before the function will be terminated.
    • callback - By using it, you can explicitly return data to the caller (like an HTTP response)
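To make these pieces concrete, here is a sketch of invoking a handler locally with a hand-made event object. The event shape and the response format are illustrative, not AWS APIs:

```javascript
// A handler that reads data from the event and returns a
// result to the caller through the callback.
const handler = (event, context, callback) => {
  // context would carry runtime info on AWS; here it's an empty stub
  callback(null, { message: `Hello ${event.name}!` })
}

// Local invocation with a hand-made event:
let response
handler({ name: 'Node Hero' }, {}, (err, result) => {
  if (err) throw err
  response = result
})
console.log(response.message) // 'Hello Node Hero!'
```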

Deploying AWS functions

Now that we have a working Lambda function, it's time to deploy it.

This is where your troubles begin! But Why?

By default, to make AWS Lambda work with HTTP, for example, you'd have to manage not just the AWS Lambda function, but an API Gateway as well.

Deploying usually means uploading a ZIP file, which replaces the old version of the AWS Lambda function. To save you all the headaches, let's introduce the Serverless framework.

Enter the Serverless framework

The Serverless framework is an open-source, MIT-licensed solution that makes creating and managing AWS Lambda functions easier.

Getting started with Serverless on AWS Lambda is just a few commands away:

# 1. Create a new Serverless project:
$ serverless create --template aws-nodejs --path my-service
# 2. Change into the newly created directory
$ cd my-service
# 3. Install npm dependencies
$ npm install
# 4. Deploy
$ serverless deploy

These steps will produce a serverless.yml in your project which stores the description of the service (like route mapping for functions) as well as a .serverless directory, which contains CloudFormation files and the deployed ZIP artifact.
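To give an idea of what that description looks like, here is a minimal serverless.yml sketch wiring a function to an HTTP endpoint; the service name, handler name, and runtime are illustrative:

```yaml
service: my-service

provider:
  name: aws
  runtime: nodejs6.10

functions:
  hello:
    # maps to the exported "hello" function in handler.js
    handler: handler.hello
    events:
      - http:
          path: hello
          method: get
```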

Drawbacks of using AWS Lambda

• Vendor control / lock-in

When you start to use a serverless solution, you give up some control of your system to the cloud provider - in case of an outage, your application will most probably be affected as well.

Also, each provider's FaaS interface works differently, so porting your codebase from one provider to another won't be possible without code changes.

• Multitenancy

Multitenancy refers to the situation where multiple customers are running on the same host. This can cause problems in security, robustness, and performance alike (like a high-load customer causing another to slow down).

Benefits of using AWS Lambda

• Reduced operational cost

You can think of Serverless Computing as an outsourced infrastructure, where basically you are paying someone to manage your servers.

Since you are using a service that many other companies are using as well, the costs go down. The cost reduction manifests in two different areas, both infrastructure and people costs, as you will spend less time maintaining your machines.

• Reduced scaling cost

Given that horizontal scaling happens automatically, and you will only pay for the resources you actually use, serverless can bring a huge cost cut for you.

Imagine a scenario where you send out marketing/sales emails on a weekly basis: your site's traffic will only peak in the hours after they are sent.

• Easier operational management

As the auto-scaling logic of your infrastructure is handled by the vendor, you don't even have to think about horizontally scaling your application - you just have to write it in a way that it can be scaled horizontally.

Read more

I hope after reading this article, you became more curious about what Serverless and AWS Lambda can do for you. If that's the case, I recommend checking out the following extra resources:

If you have any questions, let me know in the comments below!

Node.js Post-Mortem Diagnostics & Debugging

Post-mortem diagnostics & debugging comes into the picture when you want to figure out what went wrong with your Node.js application in production.

In this chapter of Node.js at Scale we will take a look at node-report, a core project which aims to help you to do post-mortem diagnostics & debugging.

The node-report diagnostics module

The purpose of the module is to produce a human-readable diagnostics summary file. It is meant to be used in both development and production environments.

The generated report includes:

  • JavaScript and native stack traces,
  • heap statistics,
  • system information,
  • resource usage,
  • loaded libraries.

Currently node-report supports Node.js v4, v6, and v7 on AIX, Linux, MacOS, SmartOS, and Windows.

Adding it to your project just takes an npm install and require:

npm install node-report --save  

Once you add node-report to your application, it will automatically listen for unhandled exceptions and fatal error events, and trigger report generation. Report generation can also be triggered by sending a USR2 signal to the Node.js process.

Use cases of node-report

Diagnostics of exceptions

For the sake of simplicity, imagine you have the following endpoint in one of your applications:

function myListener(request, response) {
  switch (request.url) {
    case '/exception':
      throw new Error('*** exception.js: uncaught exception thrown from function myListener()')
  }
}

This code simply throws an exception once the /exception route handler is called. To make sure we get the diagnostics information, we have to add the node-report module to our application, as shown previously.

require('node-report')

function myListener(request, response) {
  switch (request.url) {
    case '/exception':
      throw new Error('*** exception.js: uncaught exception thrown from function myListener()')
  }
}

Let's see what happens once the endpoint gets called! Our report just got written into a file:

Writing Node.js report to file: node-report.20170506.100759.20988.001.txt  
Node.js report completed  


The header

Once you open the file, you'll get something like this:

=================== Node Report ===================

Event: exception, location: "OnUncaughtException"  
Filename: node-report.20170506.100759.20988.001.txt  
Dump event time:  2017/05/06 10:07:59  
Module load time: 2017/05/06 10:07:53  
Process ID: 20988  
Command line: node demo/exception.js

Node.js version: v6.10.0  
(ares: 1.10.1-DEV, http_parser: 2.7.0, icu: 58.2, modules: 48, openssl: 1.0.2k, 
 uv: 1.9.1, v8:, zlib: 1.2.8)

node-report version: 2.1.2 (built against Node.js v6.10.0, 64 bit)

OS version: Darwin 16.4.0 Darwin Kernel Version 16.4.0: Thu Dec 22 22:53:21 PST 2016; root:xnu-3789.41.3~3/RELEASE_X86_64

Machine: Gergelys-MacBook-Pro.local x86_64  

You can think of this part as a header for your diagnostics summary - it includes..

  • the main event why the report was created,
  • how the Node.js application was started (node demo/exception.js),
  • what Node.js version was used,
  • the host operating system,
  • and the version of node-report itself.

The stack traces

The next part of the report includes the captured stack traces, both for JavaScript and the native part:

=================== JavaScript Stack Trace ===================
Server.myListener (/Users/gergelyke/Development/risingstack/node-report/demo/exception.js:19:5)  
emitTwo (events.js:106:13)  
Server.emit (events.js:191:7)  
HTTPParser.parserOnIncoming [as onIncoming] (_http_server.js:546:12)  
HTTPParser.parserOnHeadersComplete (_http_common.js:99:23)  

In the JavaScript part, you can see..

  • the stack trace (which function called which one with line numbers),
  • and where the exception occurred.

In the native part, you can see the same thing - just on a lower level, in the native code of Node.js:

=================== Native Stack Trace ===================
 0: [pc=0x103c0bd50] nodereport::OnUncaughtException(v8::Isolate*) [/Users/gergelyke/Development/risingstack/node-report/api.node]
 1: [pc=0x10057d1c2] v8::internal::Isolate::Throw(v8::internal::Object*, v8::internal::MessageLocation*) [/Users/gergelyke/.nvm/versions/node/v6.10.0/bin/node]
 2: [pc=0x100708691] v8::internal::Runtime_Throw(int, v8::internal::Object**, v8::internal::Isolate*) [/Users/gergelyke/.nvm/versions/node/v6.10.0/bin/node]
 3: [pc=0x3b67f8092a7] 
 4: [pc=0x3b67f99ab41] 
 5: [pc=0x3b67f921533] 

Heap and garbage collector metrics

You can see in the heap metrics how each heap space performed during the creation of the report:

  • new space,
  • old space,
  • code space,
  • map space,
  • large object space.

These metrics include:

  • memory size,
  • committed memory size,
  • capacity,
  • used size,
  • available size.

To better understand how memory handling in Node.js works, check out the following articles:

=================== JavaScript Heap and GC ===================
Heap space name: new_space  
    Memory size: 2,097,152 bytes, committed memory: 2,097,152 bytes
    Capacity: 1,031,680 bytes, used: 530,736 bytes, available: 500,944 bytes
Heap space name: old_space  
    Memory size: 3,100,672 bytes, committed memory: 3,100,672 bytes
    Capacity: 2,494,136 bytes, used: 2,492,728 bytes, available: 1,408 bytes

Total heap memory size: 8,425,472 bytes  
Total heap committed memory: 8,425,472 bytes  
Total used heap memory: 4,283,264 bytes  
Total available heap memory: 1,489,426,608 bytes

Heap memory limit: 1,501,560,832  

Resource usage

The resource usage section includes metrics on..

  • CPU usage,
  • the resident set size,
  • information on page faults,
  • and the file system activity.

=================== Resource usage ===================
Process total resource usage:  
  User mode CPU: 0.119704 secs
  Kernel mode CPU: 0.020466 secs
  Average CPU Consumption : 2.33617%
  Maximum resident set size: 21,965,570,048 bytes
  Page faults: 13 (I/O required) 5461 (no I/O required)
  Filesystem activity: 0 reads 3 writes

System information

The system information section includes..

  • environment variables,
  • resource limits (like open files, CPU time or max memory size)
  • and loaded libraries.

Diagnostics of fatal errors

The node-report module can also help once you run into a fatal error, like when your application runs out of memory.

By default, you will get an error message something like this:

<--- Last few GCs --->

   23249 ms: Mark-sweep 1380.3 (1420.7) -> 1380.3 (1435.7) MB, 695.6 / 0.0 ms [allocation failure] [scavenge might not succeed].
   24227 ms: Mark-sweep 1394.8 (1435.7) -> 1394.8 (1435.7) MB, 953.4 / 0.0 ms (+ 8.3 ms in 231 steps since start of marking, biggest step 1.2 ms) [allocation failure] [scavenge might not succeed].

On its own, this information is not that helpful - you don't know the context or the state of the application. With node-report, it gets better.

First of all, in the generated post-mortem diagnostics summary you will have a more descriptive event:

Event: Allocation failed - JavaScript heap out of memory, location: "MarkCompactCollector: semi-space copy, fallback in old gen"  

Secondly, you will get the native stack trace - that can help you to understand better why the allocation failed.

Diagnostics of blocking operations

Imagine you have the following loops which block your event loop. This is a performance nightmare.

var list = []
for (let i = 0; i < 10000000000; i++) {
  for (let j = 0; j < 1000; j++) {
    list.push(new MyRecord())
  }
  for (let j = 0; j < 1000; j++) {
    list[j].id += 1
    list[j].account += 2
  }
  for (let j = 0; j < 1000; j++) {
    // ...
  }
}

With node-report, you can request reports even when your process is busy by sending the USR2 signal. Once you do that, you will receive the stack trace and see in a minute where your application spends its time.

(Examples are taken from the node-report repository)

The API of node-report

Triggering report generation programmatically

The creation of the report can also be triggered using the JavaScript API. This way your report will be saved in a file, just like when it was triggered automatically.

const nodeReport = require('node-report')
nodeReport.triggerReport()

Getting the report as a string

Using the JavaScript API, the report can also be retrieved as a string.

const nodeReport = require('node-report')
const report = nodeReport.getReport()
const report = nodeReport.getReport()  

Using without automatic triggering

If you don't want to use automatic triggers (like the fatal error or the uncaught exception) you can opt-out of them by requiring the API itself - also, the file name can be specified as well:

const nodeReport = require('node-report/api')
// specify a custom file name for the generated reports
nodeReport.setFileName('my-diagnostics-report.txt')


If you feel like making Node.js even better, please consider joining the Postmortem Diagnostics working group, where you can contribute to the module.

The Postmortem Diagnostics working group is dedicated to the support and improvement of postmortem debugging for Node.js. It seeks to elevate the role of postmortem debugging for Node, to assist in the development of techniques and tools, and to make techniques and tools known and available to Node.js users.

In the next chapter of the Node.js at Scale series, we will discuss profiling Node.js Applications. If you have any questions, please let me know in the comments section below.

Mastering the Node.js Core Modules - The File System & fs Module

In this article, we'll take a look at the File System core module, File Streams and some fs module alternatives.

Imagine that you just got a task, which you have to complete using Node.js..

The problem you face seems to be easy, but instead of checking the official Node.js docs, you head over to Google or npm and search for a module that can do the job for you.

While this is totally okay; sometimes the core modules could easily do the trick for you.

In this new Mastering the Node.js Core Modules series you can learn what hidden/barely known features the core modules have, and how you can use them. We will also mention modules that extend their behaviors and are great additions to your daily development flow.

The Node.js fs module

File I/O is provided by simple wrappers around standard POSIX functions. To use the fs module you have to require it with require('fs'). All the methods have asynchronous and synchronous forms.

The asynchronous API

// the async api
const fs = require('fs')

fs.unlink('/tmp/hello', (err) => {
  if (err) {
    return console.log(err)
  }
  console.log('successfully deleted /tmp/hello')
})

You should always use the asynchronous API when developing production code, as it won't block the event loop - this lets you build performant applications.

The synchronous API

// the sync api
const fs = require('fs')

try {
  fs.unlinkSync('/tmp/hello')
  console.log('successfully deleted /tmp/hello')
} catch (err) {
  console.log(err)
}

You should only use the synchronous API when building proof of concept applications, or small CLIs.

Node.js File Streams

One of the things we see a lot is that developers barely take advantage of file streams.

What are Node.js streams, anyways?

Streams are a first-class construct in Node.js for handling data. There are three main concepts to understand:

  • source - the object where your data comes from,
  • pipeline - where your data passes through (you can filter, or modify it here),
  • sink - where your data ends up.

For more information check Substack's Stream Handbook.

As the core fs module does not expose a feature to copy files, you can easily do it with streams:

// copy a file
const fs = require('fs')
const readableStream = fs.createReadStream('original.txt')
const writableStream = fs.createWriteStream('copy.txt')

readableStream.pipe(writableStream)

You could ask - why should I do it when it is just a cp command away?

The biggest advantage of using streams in this case is the ability to transform the files on the fly - for example, you could easily decompress a file like this:

const fs = require('fs')
const zlib = require('zlib')

fs.createReadStream('original.txt.gz')
  .pipe(zlib.createGunzip())
  .pipe(fs.createWriteStream('original.txt'))


When not to use fs.access

The purpose of the fs.access method is to check whether the user has permissions for the given file or path, something like this:

fs.access('/etc/passwd', fs.constants.R_OK | fs.constants.W_OK, (err) => {
  if (err) {
    return console.error('no access')
  }
  console.log('access for read/write')
})

Constants exposed for permission checking:

  • fs.constants.F_OK - to check if the path is visible to the calling process,
  • fs.constants.R_OK - to check if the path can be read by the process,
  • fs.constants.W_OK - to check if the path can be written by the process,
  • fs.constants.X_OK - to check if the path can be executed by the process.

However, please note that using fs.access to check for the accessibility of a file before calling fs.readFile or fs.writeFile is not recommended.

The reason is simple - if you do so, you introduce a race condition: between your check and the actual file operation, another process may have already changed the file.

Instead, you should open the file directly, and handle error cases there.

Caveats about

With the method, you can listen for changes to a file or a directory.

However, the API is not 100% consistent across platforms, and on some systems, it is not available at all:

Note that the recursive option is only supported on OS X and Windows, but not on Linux.

Also, the fileName argument in the watch callback is not always provided (as it is only supported on Linux and Windows), so you should prepare for fallbacks if it is undefined:'some/path', (eventType, fileName) => {
  if (!fileName) {
    // fileName is missing, handle it gracefully
  }
})

Useful fs modules from npm

There are some very useful modules maintained by the community that extend the functionality of the fs module.



The graceful-fs module is a drop-in replacement for the core fs module, with some improvements:

  • queues up open and readdir calls, and retries them once something closes if there is an EMFILE error from too many file descriptors,
  • ignores EINVAL and EPERM errors in chown, fchown or lchown if the user isn't root,
  • makes lchmod and lchown become noops, if not available,
  • retries reading a file if read results in EAGAIN error.

You can start using it just like the core fs module, or alternatively by patching the global module.

// use as a standalone module
const fs = require('graceful-fs')

// patching the global one
const originalFs = require('fs')
const gracefulFs = require('graceful-fs')
gracefulFs.gracefulify(originalFs)



The mock-fs module allows Node's built-in fs module to be backed temporarily by an in-memory, mock file system. This lets you run tests against a set of mock files or directories.

Starting to use the module is as easy as:

const mock = require('mock-fs')
const fs = require('fs')

  'path/to/fake/dir': {
    'some-file.txt': 'file content here',
    'empty-dir': {}
  },
  'path/to/some.png': new Buffer([8, 6, 7, 5, 3, 0, 9])

fs.exists('path/to/fake/dir', function (exists) {
  console.log(exists) // will output true
})



File locking is a way to restrict access to a file by allowing only one process access at any given time. This can prevent race condition scenarios.

Adding lockfiles using the lockfile module is straightforward:

const lockFile = require('lockfile')

lockFile.lock('some-file.lock', function (err) {
  // if err, then it failed to acquire a lock;
  // if there was no error, then the lock file was created,
  // and won't be deleted until we unlock it

  // then, some time later, do:
  lockFile.unlock('some-file.lock', function (err) {
    // the lock has been released
  })
})



I hope this was a useful explanation of the Node.js file system and its possibilities.

If you have any questions about the topic, please let me know in the comments section below.

How to Debug Node.js with the Best Tools Available

Debugging - the process of finding and fixing defects in software - can be a challenging task to do in all languages. Node.js is no exception.

Luckily, the tooling for finding these issues has improved a lot in the past few years. Let's take a look at the options you have for finding and fixing bugs in your Node.js applications!

We will dive into two different aspects of debugging Node.js applications - the first one will be logging, so you can keep an eye on production systems, and have events from there. After logging, we will take a look at how you can debug your applications in development environments.

Logging in Node.js

Logging takes place in the execution of your application to provide an audit trail that can be used to understand the activity of the system and to diagnose problems to find and fix bugs.

For logging purposes, you have lots of options when building Node.js applications. Some npm modules are shipped with built-in logging that can be turned on when needed using the debug module. For your own applications, you have to pick a logger too! We will take a look at pino.

Before jumping into the logging libraries, let's take a look at the requirements they have to fulfil:

  • timestamps - it is crucial to know which event happened when,
  • formatting - log lines must be easily understandable by humans, and straightforward to parse for applications,
  • log destination - it should be always the standard output/error, applications should not concern themselves with log routing,
  • log levels - log events have different severity levels, in most cases, you won't be interested in debug or info level events.

The debug module of Node.js

Recommendation: use the debug module for modules published on npm

Let's see how it makes your life easier! Imagine that you have a Node.js module that serves requests, and also sends out some of its own.

// index.js
const debugHttpIncoming = require('debug')('http:incoming')
const debugHttpOutgoing = require('debug')('http:outgoing')

let outgoingRequest = {
  url: ''
}

// sending some request
debugHttpOutgoing('sending request to %s', outgoingRequest.url)

let incomingRequest = {
  body: '{"status": "ok"}'
}

// serving some request
debugHttpIncoming('got JSON body %s', incomingRequest.body)

Once you have it, start your application this way:

DEBUG=http:incoming,http:outgoing node index.js  

The output will be something like this:

Output of Node.js Debugging

Also, the debug module supports wildcards with the * character. To get the same result we got previously, we simply could start our application with DEBUG=http:* node index.js.

What's really nice about the debug module is that a lot of modules on npm (like Express or Koa) are shipped with it - as of the time of writing this article, more than 14,000 modules.

The pino logger module

Recommendation: use for your applications when performance is key

Pino is an extremely fast Node.js logger, inspired by bunyan. In many cases, pino is over 6x faster than alternatives like bunyan or winston:

benchWinston*10000:     2226.117ms  
benchBunyan*10000:      1355.229ms  
benchDebug*10000:       445.291ms  
benchLogLevel*10000:    322.181ms  
benchBole*10000:        291.727ms  
benchPino*10000:        269.109ms  
benchPinoExtreme*10000: 102.239ms  

Getting started with pino is straightforward:

const pino = require('pino')()'hello pino')'the answer is %d', 42)
pino.error(new Error('an error'))

The above snippet produces the following log lines:

{"pid":28325,"hostname":"Gergelys-MacBook-Pro.local","level":30,"time":1492858757722,"msg":"hello pino","v":1}
{"pid":28325,"hostname":"Gergelys-MacBook-Pro.local","level":30,"time":1492858757724,"msg":"the answer is 42","v":1}
{"pid":28325,"hostname":"Gergelys-MacBook-Pro.local","level":50,"time":1492858757725,"msg":"an error","type":"Error","stack":"Error: an error\n    at Object.<anonymous> (/Users/gergelyke/Development/risingstack/node-js-at-scale-debugging/pino.js:5:12)\n    at Module._compile (module.js:570:32)\n    at Object.Module._extensions..js (module.js:579:10)\n    at Module.load (module.js:487:32)\n    at tryModuleLoad (module.js:446:12)\n    at Function.Module._load (module.js:438:3)\n    at Module.runMain (module.js:604:10)\n    at run (bootstrap_node.js:394:7)\n    at startup (bootstrap_node.js:149:9)\n    at bootstrap_node.js:509:3","v":1}

The Built-in Node.js Debugger module

Node.js ships with an out-of-process debugging utility, accessible via a TCP-based protocol and built-in debugging client. You can start it using the following command:

$ node debug index.js

This debugging agent is not a fully featured debugger - you won't get a fancy user interface; however, simple inspections are possible.

You can add breakpoints to your code by adding the debugger statement into your codebase:

const express = require('express')
const app = express()

app.get('/', (req, res) => {
  debugger
  res.send('ok')
})

app.listen(3000)
This way the execution of your script will be paused at that line, then you can start using the commands exposed by the debugging agent:

  • cont or c - continue execution,
  • next or n - step next,
  • step or s - step in,
  • out or o - step out,
  • repl - to evaluate script's context.

V8 Inspector Integration for Node.js

The V8 inspector integration allows attaching Chrome DevTools to Node.js instances for debugging by using the Chrome Debugging Protocol.

V8 Inspector can be enabled by passing the --inspect flag when starting a Node.js application:

$ node --inspect index.js

In most cases, it makes sense to pause the execution of the application at the very first line of your codebase and continue from there - this way you won't miss any code that runs before the debugger attaches.

$ node --inspect-brk index.js


How to Debug Node.js with Visual Studio Code

Most modern IDEs have some support for debugging applications - so does VS Code. It has built-in debugging support for Node.js.

What you can see below, is the debugging interface of VS Code - with the context variables, watched expressions, call stack and breakpoints.

VS Code Debugging Layout Image credit: Visual Studio Code

One of the most valuable features of the integrated Visual Studio Code debugger is the ability to add conditional breakpoints. With conditional breakpoints, the breakpoint will be hit whenever the expression evaluates to true.

If you need more advanced settings for VS Code, it comes with a configuration file, .vscode/launch.json which describes how the debugger should be launched. The default launch.json looks something like this:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "launch",
            "name": "Launch Program",
            "program": "${workspaceRoot}/index.js"
        },
        {
            "type": "node",
            "request": "attach",
            "name": "Attach to Port",
            "address": "localhost",
            "port": 5858
        }
    ]
}
For advanced configuration settings of launch.json, check the official Visual Studio Code documentation.

For more information on debugging with Visual Studio Code, visit the official site.

Next Up

If you have any questions about debugging, please let me know in the comments section.

In the next episode of the Node.js at Scale series, we are going to talk about Node.js Post-Mortem Diagnostics & Debugging.

Announcing Free Node.js Monitoring & Debugging with Trace

Today, we’re excited to announce that Trace, our Node.js monitoring & debugging tool is now free for open-source projects.

What is Trace?

We launched Trace a year ago with the intention of helping developers looking for a Node.js specific APM which is easy to use and helps with the most difficult aspects of building Node projects, like..

  • finding memory leaks in a production environment
  • profiling CPU usage to find bottlenecks
  • tracing distributed call-chains
  • avoiding security leaks & bad npm packages

.. and so on.

Node.js Monitoring with Trace by RisingStack - Performance Metrics chart

Why are we giving it away for free?

We use a ton of open-source technology every day, and we are also the maintainers of some.

We know from experience that developing an open-source project is hard work, which requires a lot of knowledge and persistence.

Trace will save a lot of time for those who use Node for their open-source projects.

How to get started with Trace?

  1. Visit and sign up - it's free.
  2. Connect your app with Trace.
  3. Head over to this form and tell us a little bit about your project.

Done. Your open-source project will be monitored for free as a result.

If you need help with Node.js Monitoring & Debugging..

Just drop us a tweet at @RisingStack if you have any additional questions about the tool or the process.

If you'd like to read a little bit more about the topic, I recommend to read our previous article The Definitive Guide for Monitoring Node.js Applications.

One more thing

At the same time of making Trace available for open-source projects, we're announcing our new line of business at RisingStack:

Commercial Node.js support, aimed at enterprises with Node.js applications running in a production environment.

RisingStack now helps to bootstrap and operate Node.js apps - no matter what life cycle they are in.

Disclaimer: We retain the exclusive right to accept or deny your application to use Trace by RisingStack for free.

Mastering the Node.js CLI & Command Line Options

Node.js comes with a lot of CLI options to expose built-in debugging & to modify how V8, the JavaScript engine works.

In this post, we have collected the most important CLI commands to help you become more productive.

Accessing Node.js CLI Options

To get a full list of all available Node.js CLI options in your current distribution of Node.js, you can access the manual page from the terminal using:

$ man node

Usage: node [options] [ -e script | script.js ] [arguments]  
       node debug script.js [arguments] 

  -v, --version         print Node.js version
  -e, --eval script     evaluate script
  -p, --print           evaluate script and print result
  -c, --check           syntax check script without executing

As you can see in the usage section above, you have to provide the options before the script you want to run.

Take the following file:

console.log(new Buffer(100))  

To take advantage of the --zero-fill-buffers option, you have to run your application using:

$ node --zero-fill-buffers index.js

This way the application will produce the correct output, instead of random memory garbage:

<Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... >  

CLI Options

Now that we've seen how to pass CLI options to Node.js, let's see what other options there are!

--version or -v

Using node --version, or node -v for short, you can print the version of Node.js you are using.

$ node -v

--eval or -e

Using the --eval option, you can run JavaScript code right from your terminal. The modules which are predefined in REPL can also be used without requiring them, like the http or the fs module.

$ node -e 'console.log(3 + 2)'
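Since those modules are predefined, a one-liner can use them without a require - for example:

```shell
# core modules like os or fs are available without require() in -e and -p
node -e 'console.log(os.platform())'
node -p 'os.cpus().length'
```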

--print or -p

The --print option works the same way as --eval; however, it prints the result of the expression. To achieve the same output as in the previous example, we can simply leave out the console.log:

$ node -p '3 + 2'

--check or -c

Available since v4.2.0

The --check option instructs Node.js to check the syntax of the provided file, without actually executing it.

Take the following example again:

console.log(new Buffer(100)  

As you can see, a closing ) is missing. Once you run this file using node index.js, it will produce the following output:

(function (exports, require, module, __filename, __dirname) { console.log(new Buffer(100)
SyntaxError: missing ) after argument list  
    at Object.exports.runInThisContext (vm.js:76:16)
    at Module._compile (module.js:542:28)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
    at Module.runMain (module.js:604:10)
    at run (bootstrap_node.js:394:7)

Using the --check option you can check for the same issue, without executing the script, using node --check index.js. The output will be similar, except you won't get the stack trace, as the script never ran:

(function (exports, require, module, __filename, __dirname) { console.log(new Buffer(100)
SyntaxError: missing ) after argument list  
    at startup (bootstrap_node.js:144:11)
    at bootstrap_node.js:509:3

The --check option can come in handy when you want to see if your script is syntactically correct, without executing it.



--inspect

Available since v6.3.0

Using node --inspect will activate the inspector on the provided host and port. If they are not provided, the default is The debugging tools attached to Node.js instances communicate over a TCP port using the Chrome Debugging Protocol.


--inspect-brk

Available since v7.6.0

The --inspect-brk option has the same functionality as --inspect; however, it pauses the execution at the first line of the user script.

$ node --inspect-brk index.js 
Debugger listening on port 9229.  
Warning: This is an experimental feature and could change at any time.  
To start debugging, open the following URL in Chrome:  

Once you have run this command, copy and paste the URL you got into Chrome to start debugging your Node.js process.


Available since v6.0.0

Node.js can be started with the --zero-fill-buffers command line option to force all newly allocated Buffer instances to be automatically zero-filled upon creation. The reason to do so is that newly allocated Buffer instances can contain sensitive data.

It should be used when you must guarantee that newly created Buffer instances cannot contain sensitive data, as zero-filling has a significant impact on performance.

Also note that some Buffer constructors were deprecated in v6.0.0:

  • new Buffer(array)
  • new Buffer(arrayBuffer[, byteOffset [, length]])
  • new Buffer(buffer)
  • new Buffer(size)
  • new Buffer(string[, encoding])

Instead, you should use Buffer.alloc(size[, fill[, encoding]]), Buffer.from(array), Buffer.from(buffer), Buffer.from(arrayBuffer[, byteOffset[, length]]) and Buffer.from(string[, encoding]).
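
A quick sketch of the safe replacements:

```javascript
// Buffer.alloc zero-fills the new buffer; Buffer.from copies the given input
const zeroed = Buffer.alloc(10)
const fromString = Buffer.from('hello', 'utf8')
const fromArray = Buffer.from([0x68, 0x69])

console.log(zeroed.every((byte) => byte === 0)) // true
console.log(fromString.toString())              // hello
console.log(fromArray.toString())               // hi
```

Unlike new Buffer(size), Buffer.alloc never hands you uninitialized memory, so there is nothing sensitive to leak.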

You can read more on the security implications of the Buffer module on the Snyk blog.


Using the --prof-process option, Node.js processes the output of the V8 profiler into a human-readable report.

To use it, first you have to run your applications using:

node --prof index.js  

Once it has run, a new file prefixed with isolate- will be placed in your working directory.

Then, you have to run the Node.js process with the --prof-process option:

node --prof-process isolate-0x102001600-v8.log > output.txt  

This file will contain metrics from the V8 profiler, like how much time was spent in the C++ layer, or in the JavaScript part, and which function calls took how much time. Something like this:

   ticks  total  nonlib   name
     16   18.4%   18.4%  node::ContextifyScript::New(v8::FunctionCallbackInfo<v8::Value> const&)
      4    4.6%    4.6%  ___mkdir_extended
      2    2.3%    2.3%  void v8::internal::String::WriteToFlat<unsigned short>(v8::internal::String*, unsigned short*, int, int)
      2    2.3%    2.3%  void v8::internal::ScavengingVisitor<(v8::internal::MarksHandling)1, (v8::internal::LoggingAndProfiling)0>::ObjectEvacuationStrategy<(v8::internal::ScavengingVisitor<(v8::internal::MarksHandling)1, (v8::internal::LoggingAndProfiling)0>::ObjectContents)1>::VisitSpecialized<24>(v8::internal::Map*, v8::internal::HeapObject**, v8::internal::HeapObject*)

   ticks  total  nonlib   name
      1    1.1%    1.1%  JavaScript
     70   80.5%   80.5%  C++
      5    5.7%    5.7%  GC
      0    0.0%          Shared libraries
     16   18.4%          Unaccounted

To get a full list of Node.js CLI options, check out the official documentation here.

V8 Options

You can print all the available V8 options using the --v8-options command line option.

Currently, V8 exposes more than 100 command line options - here we just picked a few to showcase some of the functionality they provide. Some of these options can drastically change how V8 behaves - use them with caution!


With the --harmony flag, you can enable all completed harmony features.


With the --max_old_space_size option, you can set the maximum size (in megabytes) of the old space on the heap, which directly affects how much memory your process can allocate.

This setting can come in handy when you run in low-memory environments.
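
For example, to cap the old space at roughly 400 MB (the value is arbitrary - size it to your environment):

```shell
# Any allocation pushing the old space past ~400 MB will abort the process
node --max_old_space_size=400 -e "console.log('heap capped')"
```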


With the --optimize_for_size option, you can instruct V8 to optimize the memory space for size - even if the application gets slower.

Just like the previous option, it can be useful in low-memory environments.

Environment Variables


Setting the NODE_DEBUG environment variable enables the core modules to print debug information. You can run the previous example like this to get debug information on the module core component (instead of module, you can go for http, fs, etc.):

$ NODE_DEBUG=module node index.js

The output will be something like this:

MODULE 7595: looking for "/Users/gergelyke/Development/risingstack/mastering-nodejs-cli/index.js" in ["/Users/gergelyke/.node_modules","/Users/gergelyke/.node_libraries","/Users/gergelyke/.nvm/versions/node/v6.10.0/lib/node"]  
MODULE 7595: load "/Users/gergelyke/Development/risingstack/mastering-nodejs-cli/index.js" for module "."  


Using the NODE_PATH environment variable, you can add extra paths for the Node.js process to search for modules in.


Using the OPENSSL_CONF environment variable, you can load an OpenSSL configuration file on startup.

For a full list of supported environment variables, check out the official Node.js docs.

Let's Contribute to the CLI related Node Core Issues!

As you can see, the CLI is a really useful tool which becomes better with each Node version!

If you'd like to contribute to its advancement, you can help by checking out the currently open CLI-related issues in the Node.js repository!

Node.js War Stories: Debugging Issues in Production

In this article, you can read stories from Netflix, RisingStack & nearForm about Node.js issues in production - so you can learn from our mistakes and avoid repeating them. You'll also learn what methods we used to debug these Node.js issues.

Special shoutout to Yunong Xiao of Netflix, Matteo Collina of nearForm & Shubhra Kar from Strongloop for helping us with their insights for this post!

At RisingStack, we have accumulated tremendous experience running Node apps in production over the past 4 years - thanks to our Node.js consulting, training and development business.

Like the Node teams at Netflix & nearForm, we picked up the habit of always writing thorough postmortems, so the whole team (and now the whole world) can learn from the mistakes we made.

Netflix & Debugging Node: Know your Dependencies

Let's start with a slowdown story from Yunong Xiao, which happened with our friends at Netflix.

The trouble started with the Netflix team noticing that their applications' response times increased progressively - the latency of some of their endpoints grew by 10ms every hour.

This was also reflected in the growing CPU usage.

Request latencies for each region over time - photo credit: Netflix

At first, they started to investigate whether the request handler was responsible for slowing things down.

After testing it in isolation, it turned out that the request handler had a constant response time of around 1ms.

So the problem was not there, and they started to suspect that it was deeper in the stack.

The next things Yunong & the Netflix team tried were CPU flame graphs and Linux Perf Events.

Flame graph of the Netflix slowdown - photo credit: Netflix

What you can see in the flame graph above is that

  • it has high stacks (which means a lot of function calls),
  • and the boxes are wide (meaning we are spending quite some time in those functions).

After further inspection, the team found that Express's router.handle function showed up many times in the stacks.

The Express.js source code reveals a couple of interesting tidbits:

  • Route handlers for all endpoints are stored in one global array.
  • Express.js recursively iterates through and invokes all handlers until it finds the right route handler.

Before revealing the solution of this mystery, we have to get one more detail:

Netflix's codebase contained a periodic task that ran every 6 minutes, grabbed new route configs from an external resource, and updated the application's route handlers to reflect the changes.

This was done by deleting old handlers and adding new ones. Accidentally, it also added the same static handler over and over again - even before the API route handlers. As it turned out, this caused the extra 10ms of response time every hour.

Takeaways from Netflix's Issue

  • Always know your dependencies - first, you have to fully understand them before going into production with them.
  • Observability is key - flame graphs helped the Netflix engineering team to get to the bottom of the issue.

Read the full story here: Node.js in Flames.


RisingStack CTO: "Crypto takes time"

You may have already heard the story of how we broke down the monolithic infrastructure of Trace (our Node.js monitoring solution) into microservices from our CTO, Peter Marton.

The issue we'll talk about now is a slowdown which affected Trace in production:

As the very first versions of Trace ran on a PaaS, it used the public cloud to communicate with other services of ours.

To ensure the integrity of our requests, we decided to sign all of them. To do so, we went with Joyent's HTTP signing library. What's really great about it, is that the request module supports HTTP signature out of the box.

This solution was not only expensive, but it also had a bad impact on our response times.

The network delay built up our response times - photo: Trace

As you can see on the graph above, the given endpoint had a response time of 180ms - and of that, 100ms was just the network delay between the two services.

As the first step, we migrated from the PaaS provider to Kubernetes. We expected that our response times would be a lot better, as we could leverage internal networking.

We were right - our latency improved.

However, we expected better results - and a much bigger drop in our CPU usage. The next step was to do CPU profiling, just like the team at Netflix:

The crypto.sign function taking up CPU time

As you can see on the screenshot, the crypto.sign function takes up most of the CPU time, by consuming 10ms on each request. To solve this, you have two options:

  • if you are running in a trusted environment, you can drop request signing,
  • if you are in an untrusted environment, you can scale up your machines to have stronger CPUs.

Takeaways from Peter Marton

  • Latency in-between your services has a huge impact on user experience - whenever you can, leverage internal networking.
  • Crypto can take a LOT of time.

nearForm: Don't block the Node.js Event Loop

React is more popular than ever. Developers use it for both the frontend and the backend, or they even take a step further and use it to build isomorphic JavaScript applications.

However, rendering React pages can put some heavy load on the CPU, as rendering complex React components is CPU bound.

When your Node.js process is rendering, it blocks the event loop because of its synchronous nature.

As a result, the server can become entirely unresponsive - requests accumulate, which all puts load on the CPU.

What can be even worse is that requests which no longer have a client will still be served - putting load on the Node.js application for nothing, as Matteo Collina of nearForm explains.

It is not just React, but string operations in general. If you are building JSON REST APIs, you should always pay attention to JSON.parse and JSON.stringify.

As Shubhra Kar from Strongloop (now Joyent) explained, parsing and stringifying huge payloads can take a lot of time as well (and blocking the event loop in the meantime).

function requestHandler(req, res) {
  const body = req.rawBody
  let parsedBody
  try {
    parsedBody = JSON.parse(body)
  } catch (e) {
    return res.end(new Error('Error parsing the body'))
  }
  res.end('Record successfully received')
}

Simple request handler

The example above shows a simple request handler, which just parses the body. For small payloads, it works like a charm - however, if the JSON's size can be measured in megabytes, the execution time can be seconds instead of milliseconds. The same applies for JSON.stringify.

To mitigate these issues, first, you have to know about them. For that, you can use Matteo's loopbench module, or Trace's event loop metrics feature.

With loopbench, you can return a status code of 503 to the load balancer, if the request cannot be fulfilled. To enable this feature, you have to use the instance.overLimit option. This way ELB or NGINX can retry it on a different backend, and the request may be served.

Once you know about the issue and understand it, you can start working on fixing it - you can do it either by leveraging Node.js streams or by tweaking the architecture you are using.

Takeaways from nearForm

  • Always pay attention to CPU bound operations - the more you have, the more pressure you put on your event loop.
  • String operations are CPU-heavy.

Debugging Node.js Issues in Production

I hope these examples from Netflix, RisingStack & nearForm will help you to debug your Node.js apps in Production.

If you'd like to learn more, I recommend checking out these recent posts which will help you to deepen your Node knowledge:

If you have any questions, please let us know in the comments!

The Definitive Guide for Monitoring Node.js Applications

In the previous chapters of Node.js at Scale we learned how you can get Node.js testing and TDD right, and how you can use Nightwatch.js for end-to-end testing.

In this article, we will learn about running and monitoring Node.js applications in Production. Let's discuss these topics:

  • What is monitoring?
  • What should be monitored?
  • Open-source monitoring solutions
  • SaaS and On-premise monitoring offerings

What is Node.js Monitoring?

Monitoring is observing the quality of software over time. The available products and tools in this industry usually go by the term Application Performance Monitoring, or APM for short.

If you have a Node.js application in a staging or production environment, you can (and should) do monitoring on different levels:

You can monitor

  • regions,
  • zones,
  • individual servers and,
  • of course, the Node.js software that runs on them.

In this guide, we will deal with the software components only - if you run in a cloud environment, the others are usually taken care of for you.

What should be monitored?

Each application you write in Node.js produces a lot of data about its behavior.

There are different layers an APM tool should collect data from. The more of them you cover, the more insights you'll get about your system's behavior.

  • Service level
  • Host level
  • Instance (or process) level

The list you can find below collects the most crucial problems you'll run into while you maintain a Node.js application in production. We'll also discuss how monitoring helps to solve them and what kind of data you'll need to do so.

Problem 1.: Service Downtimes

If your application is unavailable, your customers can't spend money on your sites. If your APIs are down, your business partners and the services depending on them will fail as well because of you.

We all know how cringeworthy it is to apologize for service downtimes.

Your topmost priority should be preventing failures and providing 100% availability for your application.

Running a production app comes with great responsibility.

Node.js APMs can easily help you detect and prevent downtimes, since they usually collect service level metrics.

This data can show whether your application handles requests properly, although it won't always tell you if your public sites or APIs are available.

To have proper coverage of downtimes, we recommend setting up a pinger as well, which can emulate user behavior and provide foolproof data on availability. If you want to cover everything, don't forget to include different regions like the US, Europe and Asia too.

Problem 2.: Slow Services, Terrible Response Times

Slow response times have a huge impact on conversion rate, as well as on product usage. The faster your product is, the more customers and the higher user satisfaction you'll have.

Usually, all Node.js APMs can show if your services are slowing down, but interpreting that data requires further analysis.

I recommend doing two things to find the real reasons for slowing services.

  • Collect data on a process level too. Check out each instance of a service to figure out what happens under the hood.
  • Request CPU profiles when your services slow down and analyze them to find the faulty functions.

Eliminating performance bottlenecks enables you to scale your software more efficiently and also to optimize your budget.

Problem 3.: Solving Memory Leaks is Hard

Our Node.js Consulting & Development expertise has allowed us to build huge enterprise systems and help developers make them better.

What we see constantly is that Memory Leaks in Node.js applications are quite frequent and that finding out what causes them is among the greatest struggles Node developers face.

This impression is backed by data as well: our Node.js Developer Survey showed that Memory Leaks cause a lot of headaches for even the best engineers.

To find memory leaks, you have to know exactly when they happen.

Some APMs collect memory usage data which can be used to recognize a leak. What you should look for is a steady growth of memory usage which ends in a service crash & restart (as Node runs out of memory after about 1.4 gigabytes).

Node.js memory leak shown in Trace, the node.js monitoring tool

If your APM collects data on the Garbage Collector as well, you can look for the same pattern. As extra objects pile up in a Node app's memory, the time spent on Garbage Collection increases simultaneously. This is a great indicator of a memory leak.

After figuring out that you have a leak, request a memory heapdump and look for the extra objects!

This sounds easy in theory but can be challenging in practice.

What you can do is request 2 heapdumps from your production system with a Monitoring tool, and analyze these dumps with Chrome's DevTools. If you look for the extra objects in comparison mode, you'll end up seeing what piles up in your app's memory.

If you'd like a more detailed rundown on these steps, I wrote one article about finding a Node.js memory leak in Ghost, where I go into more details.

Problem 4.: Depending on Code Written by Anonymous Developers

Most Node.js applications heavily rely on npm. We can end up with a lot of dependencies written by developers of unknown expertise and intentions.

Roughly 76% of Node shops use vulnerable packages, while open source projects regularly grow stale, neglecting to fix security flaws.

There are a couple of possible steps to lower the security risks of using npm packages.

  1. Audit your modules with the Node Security Platform CLI
  2. Look for unused dependencies with the depcheck tool
  3. Use the npm stats API, or browse historic download stats to find out if others are using a package
  4. Use the npm view <pkg> maintainers command to avoid packages maintained by only a few
  5. Use the npm outdated command or Greenkeeper to learn whether you're using the latest version of a package.

Going through these steps can consume a lot of your time, so picking a Node.js Monitoring Tool which can warn you about insecure dependencies is highly recommended.

Problem 6.: Email Alerts often go Unnoticed

Let's be honest. We are developers who like spending time writing code - not going through our email accounts every 10 minutes.

In my experience, email alerts usually go unread, and it's very easy to miss a major outage or problem if we depend only on them.

Email is a subpar method to learn about issues in production.

I guess that you also don't want to watch dashboards for potential issues 24/7. This is why it is important to look for an APM with great alerting capabilities.

What I recommend is to use pager systems like opsgenie or pagerduty to learn about critical issues. Pair up the monitoring solution of your choice with one of these systems if you'd like to know about your alerts instantly.

A few alerting best-practices we follow at RisingStack:

  • Always keep alerting simple and alert on symptoms
  • Aim to have as few alerts as possible - associated with end-user pain
  • Alert on high response time and error rates as high up in the stack as possible

Problem 7.: Finding Crucial Errors in the Code

If a feature is broken on your site, it can prevent customers from achieving their goals. Sometimes it can be a sign of bad code quality. Make sure you have proper test coverage for your codebase and a good QA process (preferably automated).

If you use an APM that collects errors from your app then you'll be able to find the ones which occur more often.

The more data your APM has access to, the better the chances of finding and fixing critical issues. We recommend using a monitoring tool which collects and visualizes stack traces as well - so you'll be able to find the root causes of errors in a distributed system.

In the next part of the article, I will show you one open-source, and one SaaS / on-premises Node.js monitoring solution that will help you operate your applications.

Prometheus - an Open-Source, General Purpose Monitoring Platform

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.

Prometheus was started in 2012, and since then, many companies and organizations have adopted the tool. It is a standalone open source project and maintained independently of any company.

In 2016, Prometheus joined the Cloud Native Computing Foundation, right after Kubernetes.

The most important features of Prometheus are:

  • a multi-dimensional data model (time series identified by metric name and key/value pairs),
  • a flexible query language to leverage this dimensionality,
  • time series collection happens via a pull model over HTTP by default,
  • pushing time series is supported via an intermediary gateway.

Node.js monitoring with prometheus

As you can see from the features above, Prometheus is a general purpose monitoring solution, so you can use it with any language or technology you prefer.

Check out the official Prometheus getting started pages if you'd like to give it a try.

Before you start monitoring your Node.js services, you need to add instrumentation to them via one of the Prometheus client libraries.

For this, there is a Node.js client module, which you can find here. It supports histograms, summaries, gauges and counters.

Essentially, all you have to do is require the Prometheus client, then expose its output at an endpoint:

const Prometheus = require('prom-client')
const server = require('express')()

server.get('/metrics', (req, res) => {
  res.end(Prometheus.register.metrics())
})

server.listen(process.env.PORT || 3000)

This endpoint will produce an output that Prometheus can consume - something like this:

# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1490433285  
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 33046528  
# HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE nodejs_eventloop_lag_seconds gauge
nodejs_eventloop_lag_seconds 0.000089751  
# HELP nodejs_active_handles_total Number of active handles.
# TYPE nodejs_active_handles_total gauge
nodejs_active_handles_total 4  
# HELP nodejs_active_requests_total Number of active requests.
# TYPE nodejs_active_requests_total gauge
nodejs_active_requests_total 0  
# HELP nodejs_version_info Node.js version info.
# TYPE nodejs_version_info gauge
nodejs_version_info{version="v4.4.2",major="4",minor="4",patch="2"} 1  

Of course, these are just the default metrics which were collected by the module we have used - you can extend it with yours. In the example below we collect the number of requests served:

const Prometheus = require('prom-client')
const server = require('express')()

const PrometheusMetrics = {
  requestCounter: new Prometheus.Counter('throughput', 'The number of requests served')
}

server.use((req, res, next) => {
  PrometheusMetrics.requestCounter.inc()
  next()
})

server.get('/metrics', (req, res) => {
  res.end(Prometheus.register.metrics())
})

server.listen(process.env.PORT || 3000)

Once you run it, the /metrics endpoint will include the throughput metrics as well:

# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1490433805  
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 25120768  
# HELP nodejs_eventloop_lag_seconds Lag of event loop in seconds.
# TYPE nodejs_eventloop_lag_seconds gauge
nodejs_eventloop_lag_seconds 0.144927586  
# HELP nodejs_active_handles_total Number of active handles.
# TYPE nodejs_active_handles_total gauge
nodejs_active_handles_total 0  
# HELP nodejs_active_requests_total Number of active requests.
# TYPE nodejs_active_requests_total gauge
nodejs_active_requests_total 0  
# HELP nodejs_version_info Node.js version info.
# TYPE nodejs_version_info gauge
nodejs_version_info{version="v4.4.2",major="4",minor="4",patch="2"} 1  
# HELP throughput The number of requests served
# TYPE throughput counter
throughput 5  

Once you have exposed all your metrics, you can start querying and visualizing them - for that, please refer to the official Prometheus query documentation and the visualization documentation.

As you can imagine, instrumenting your codebase can take quite some time, since you also have to create dashboards and alerts to make sense of the data. While these solutions can sometimes provide greater flexibility for your use-case than hosted solutions, they can take months to implement - and then you have to deal with operating them as well.

If you have the time to dig deep into the topic, you'll be fine with it.

Meet Trace - our SaaS, and On-premises Node.js Monitoring Tool

As we just discussed, running your own solution requires domain knowledge, as well as expertise on how to do proper monitoring. You have to figure out what aggregation to use for what kind of metrics, and so on.

This is why it can make a lot of sense to go with a hosted monitoring solution - whether it is a SaaS product or an on-premises offering.

At RisingStack, we are developing our own Node.js Monitoring Solution, called Trace. We built all the experience into Trace which we gained through the years of providing professional Node services.

What's nice about Trace is that you have all the metrics you need by adding only a single line of code to your application - so it really takes only a few seconds to get started.

require('@risingstack/trace')

After this, the Trace collector automatically gathers your application's performance data and visualizes it for you in an easy to understand way.

Just a few things Trace is capable of doing with your production Node app:

  1. Send alerts about downtimes, slow services & bad status codes.
  2. Ping your websites and APIs with an external service + show APDEX metrics.
  3. Collect data on service, host and instance levels as well.
  4. Automatically create a (10 second-long) CPU profile in a production environment in case of a slowdown.
  5. Collect data on memory consumption and garbage collection.
  6. Create memory heapdumps automatically in case of a Memory Leak in production.
  7. Show errors and stack traces from your application.
  8. Visualize whole transaction call-chains in a distributed system.
  9. Show how your services communicate with each other on a live map.
  10. Automatically detect npm packages with security vulnerabilities.
  11. Mark new deployments and measure their effectiveness.
  12. Integrate with Slack, Pagerduty, and Opsgenie - so you'll never miss an alert.

Although Trace is currently a SaaS solution, we'll soon make an on-premises version available as well.

It will be able to do exactly the same as the cloud version, but it will run on Amazon VPC or in your own datacenter. If you're interested in it, let's talk!


I hope that in this chapter of Node.js at Scale I was able to give useful advice about monitoring your Node.js application. In the next article, you will learn how to debug Node.js applications in an easy way.

Node.js End-to-End Testing with Nightwatch.js

In this article, we are going to take a look at how you can do end-to-end testing with Node.js, using Nightwatch.js, a Node.js powered end-to-end testing framework.

In the previous chapter of Node.js at Scale, we discussed Node.js Testing and Getting TDD Right. If you did not read that article, or if you are unfamiliar with unit testing and TDD (test-driven development), I recommend checking that out before continuing with this article.

What is Node.js end-to-end testing?

Before jumping into example codes and learning to implement end-to-end testing for a Node.js project, it's worth exploring what end-to-end tests really are.

First of all, end-to-end testing is part of the black-box testing toolbox. This means that as a test writer, you are examining functionality without any knowledge of the internal implementation - without seeing any source code.

Secondly, end-to-end testing can also be used as user acceptance testing, or UAT. UAT is the process of verifying that the solution actually works for the user. This process is not focusing on finding small typos, but issues that can crash the system, or make it dysfunctional for the user.

Enter Nightwatch.js

Nightwatch.js enables you to "write end-to-end tests in Node.js quickly and effortlessly that run against a Selenium/WebDriver server".

Nightwatch is shipped with the following features:

  • a built-in test runner,
  • can control the selenium server,
  • support for hosted selenium providers, like BrowserStack or SauceLabs,
  • CSS and Xpath selectors.

Installing Nightwatch

To run Nightwatch locally, we have to do a little bit of extra work - we will need a standalone Selenium server locally, as well as a webdriver, so we can use Chrome/Firefox to test our applications locally.

With these three tools, we are going to implement the flow this diagram shows below.

Node.js end-to-end testing with Nightwatch.js - flowchart

STEP 1: Add Nightwatch

You can add Nightwatch to your project simply by running npm install nightwatch --save-dev.

This places the Nightwatch executable in your ./node_modules/.bin folder, so you don't have to install it globally.

STEP 2: Download Selenium

Selenium is a suite of tools to automate web browsers across many platforms.

Prerequisite: make sure you have JDK installed, with at least version 7. If you don't have it, you can grab it from here.

The Selenium server is a Java application which is used by Nightwatch to connect to various browsers. You can download the binary from here.

Once you have downloaded the JAR file, create a bin folder inside your project, and place it there. We will set up Nightwatch to use it, so you don't have to manually start the Selenium server.

STEP 3: Download Chromedriver

ChromeDriver is a standalone server which implements the W3C WebDriver wire protocol for Chromium.

To grab the executable, head over to the downloads section, and place it in the same bin folder.

STEP 4: Configuring Nightwatch.js

The basic Nightwatch configuration happens through a JSON configuration file.

Let's create a nightwatch.json file, and fill it with:

  "src_folders" : ["tests"],
  "output_folder" : "reports",

  "selenium" : {
    "start_process" : true,
    "server_path" : "./bin/selenium-server-standalone-3.3.1.jar",
    "log_path" : "",
    "port" : 4444,
    "cli_args" : {
      "" : "./bin/chromedriver"

  "test_settings" : {
    "default" : {
      "launch_url" : "http://localhost",
      "selenium_port"  : 4444,
      "selenium_host"  : "localhost",
      "desiredCapabilities": {
        "browserName": "chrome",
        "javascriptEnabled": true,
        "acceptSslCerts": true

With this configuration file, we told Nightwatch where it can find the binaries of the Selenium server and the Chromedriver, as well as the location of the tests we want to run.


Quick Recap

So far, we have installed Nightwatch, downloaded the standalone Selenium server, as well as the Chromedriver. With these steps, you have all the necessary tools to create end-to-end tests using Node.js and Selenium.

Writing your first Nightwatch Test

Let's add a new file in the tests folder, called homepage.js.

We are going to take the example from the Nightwatch getting started guide. Our test script will go to Google, search for Rembrandt, and check the Wikipedia page:

module.exports = {
  'Demo test Google' : function (client) {
    client
      .url('http://www.google.com')
      .waitForElementVisible('body', 1000)
      .setValue('input[type=text]', 'rembrandt van rijn')
      .waitForElementVisible('button[name=btnG]', 1000)
      .click('button[name=btnG]')
      .pause(1000)
      .assert.containsText('ol#rso li:first-child',
        'Rembrandt - Wikipedia')
      .end();
  }
};

The only thing left to do is to run Nightwatch itself! For that, I recommend adding a new script into our package.json's scripts section:

"scripts": {
  "test-e2e": "nightwatch"
}
The very last thing you have to do is to run the tests using this command:

npm run test-e2e  

If everything goes well, your test will open Chrome, run the Google search, and verify the Wikipedia result.

Nightwatch.js in Your Project

Now that you understand what end-to-end testing is and how to set up Nightwatch, it is time to start adding it to your project.

For that, you have to consider some aspects - but please note that there are no silver bullets here. Depending on your business needs, you may answer the following questions differently:

  • Where should I run my end-to-end tests? On staging? On production? When I build my containers?
  • What are the test scenarios I want to cover?
  • Who should write end-to-end tests, and when?

Summary & Next Up

In this chapter of Node.js at Scale we have learned:

  • how to set up Nightwatch,
  • how to configure it to use a standalone Selenium server,
  • and how to write basic end-to-end tests.

In the next chapter, we are going to explore how you can monitor production Node.js infrastructures.

Digital Transformation with the Node.js Stack

In this article, we explore the 9 main areas of digital transformation and show the benefits of implementing Node.js. At the end, we’ll lay out a Digital Transformation Roadmap to help you get started with this process.

Note that implementing Node.js is not the goal of a digital transformation project - it is just a great tool that opens up possibilities any organization can take advantage of.

Digital transformation is achieved by using modern technology to radically improve the performance of business processes, applications, or even whole enterprises.

One of the available technologies that enable companies to go through a major performance shift is Node.js and its ecosystem. It is a tool that grants improvement opportunities that organizations should take advantage of:

  • Increased developer productivity,
  • DevOps or NoOps practices,
  • and shipping software to production quickly using the proxy approach,

just to mention a few.

The 9 Areas of Digital Transformation

Digital Transformation projects can improve a company in nine main areas. The following elements were identified in MIT research on digital transformation, in which researchers interviewed 157 executives from 50 companies (typically with $1 billion or more in annual sales).

#1. Understanding your Customers Better

Companies are heavily investing in systems to understand specific market segments and geographies better. They have to figure out what leads to customer happiness and customer dissatisfaction.

Many enterprises are building analytics capabilities to better understand their customers. Information derived this way can be used for data-driven decisions.

#2. Achieving Top-Line Growth

Digital transformation can also be used to enhance in-person sales conversations. Instead of paper-based presentations or slides, salespeople can use great looking, interactive presentations, like tablet-based presentations.

Understanding customers better helps enterprises to transform and improve the sales experience with more personalized sales and customer service.

#3. Building Better Customer Touch Points

Customer service can be improved tremendously with new digital services - for example, by introducing new communication channels. Instead of going to a local branch of a business, customers can talk to support through Twitter or Facebook.

Self-service digital tools can be developed that save time for the customer while saving money for the company.

#4. Process Digitization

With automation, companies can focus their employees on more strategic tasks, innovation, or creativity rather than on repetitive efforts.

#5. Worker Enablement

Virtualization of individual work (the work process is separated from the location of work) has become an enabler for knowledge sharing. Information and expertise are accessible in real time to frontline employees.

#6. Data-Driven Performance Management

With the proper analytical capabilities, decisions can be made on real data and not on assumptions.

Digital transformation is changing the way strategic decisions are made. With new tools, strategic planning sessions can include more stakeholders, not just a small group.

#7. Digitally Extended Businesses

Many companies extend their physical offerings with digital ones. Examples include:

  • news outlets augmenting their print offering with digital content,
  • FMCG companies extending to e-commerce.

#8. New Digital Businesses

Companies are not just extending their current offerings with digital transformation - they are also coming up with new digital products that complement the traditional ones. Examples include connected devices, like GPS trackers that can report activity to the cloud and provide value to customers through recommendations.

#9. Digital Globalization

Global shared services, like shared finance or HR, enable organizations to build truly global operations.

The Digital Transformation Benefits of Implementing Node.js

Organizations are looking for the most effective way to achieve digital transformation - and among these companies, Node.js is becoming the de facto technology for building out digital capabilities.

Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient.

In other words: Node.js lets you write servers in JavaScript with incredible performance.

Increased Developer Productivity with Same-Language Stacks

When PayPal started using Node.js, they reported a 2x increase in productivity compared to their previous Java stack. How is that even possible?

npm, the Node.js package manager, hosts an incredible number of modules that can be used instantly. This saves a lot of effort for the development team.

Secondly, as Node.js applications are written using JavaScript, front-end developers can also easily understand what's going on and make necessary changes.

This saves valuable time again, as developers use the same language across the entire stack.

100% Business Availability, even with Extreme Load

Around 1.5 billion dollars is spent online in the US on Black Friday each year.

It is crucial that your site can keep up with the traffic. This is why Walmart, one of the biggest retailers, uses Node.js to serve 500 million pageviews on Black Friday without a hitch.

Fast Apps = Satisfied Customers

As your velocity increases because of the productivity gains, you can ship features/products sooner. Products that run faster result in better user experience.

A Kissmetrics study showed that 40% of people abandon a website that takes more than 3 seconds to load, and 47% of consumers expect a web page to load in 2 seconds or less.

To read more on the benefits of using Node.js, you can download our Node.js is Enterprise Ready ebook.

Your Digital Transformation Roadmap with Node.js

As with most new technologies introduced to a company, it’s worth taking baby-steps first with Node.js as well. As a short framework for introducing Node.js, we recommend the following steps:

  • building a Node.js core team,
  • picking a small part of the application to be rewritten/extended using Node.js,
  • extending the scope of the project to the whole organization.

Step 1 - Building your Node.js Core Team

The core Node.js team should consist of people with JavaScript experience on both the backend and the frontend. It’s not crucial that the backend engineers have prior Node.js experience; the important aspect is the vision they bring to the team.

Introducing Node.js is not just about JavaScript - members of the operations team have to join the core team as well.

The introduction of Node.js to an organization does not stop at mastering Node.js - it also means adopting modern DevOps or NoOps practices, including but not limited to continuous integration and delivery.

Step 2 - Embracing The Proxy Approach

To incrementally replace old systems or to extend their functionality easily, your team can use the proxy approach.

For the features or functions you want to replace, create a small and simple Node.js application and proxy some of your load to the newly built Node.js application. This proxy does not necessarily have to be written in Node.js. With this approach, you can easily benefit from modularized, service-oriented architecture.

Another way to use proxies is to write them in Node.js and make them talk to the legacy systems. This way, you have the option to optimize the data being sent. PayPal was one of the first adopters of Node.js at scale, and they started with this proxy approach as well.

The biggest advantages of these solutions are that you can put Node.js into production in a short amount of time, measure your results, and learn from them.

Step 3 - Measure Node.js, Be Data-Driven

For the successful introduction of Node.js during a digital transformation project, it is crucial to set up a series of benchmarks to compare the results between the legacy system and the new Node.js applications. These data points can be response times, throughput or memory and CPU usage.
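A simple way to start is to record response times from both systems and compare percentiles rather than averages. The sketch below uses the nearest-rank method; the sample values are made up purely for illustration:

```javascript
// Compute the p-th percentile of a list of samples (e.g. response
// times in milliseconds), using the nearest-rank method.
function percentile(samples, p) {
  const sorted = samples.slice().sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Illustrative response-time samples (ms) from a load test
const legacyTimes = [120, 180, 95, 240, 310, 150, 170, 220, 130, 400];
const nodeTimes   = [40, 55, 35, 90, 120, 60, 45, 80, 50, 140];

console.log('legacy p95:', percentile(legacyTimes, 95), 'ms');
console.log('node   p95:', percentile(nodeTimes, 95), 'ms');
```

Comparing p95 or p99 instead of the mean keeps a few fast requests from hiding a poor tail latency - usually the number your users actually feel.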

Orchestrating The Node.js Stack

As mentioned previously, introducing Node.js does not stop at mastering Node.js itself; introducing continuous integration and delivery is crucial as well.

Also, from an operations point of view, it is important to add containers to ship applications with confidence.

To orchestrate the containers running the Node.js applications, we encourage companies to adopt Kubernetes, an open-source system for automating the deployment, scaling, and management of containerized applications.

RisingStack and Digital Transformation with Node.js

RisingStack enables amazing companies to succeed with Node.js and related technologies to stay ahead of the competition. We have provided professional Node.js development and consulting services since the early days of Node.js, helping companies like Lufthansa or Cisco thrive with this technology.