

Node.js Tutorial Videos: Debugging, Async, Memory Leaks, CPU Profiling

At RisingStack, we're continuously working on delivering Node.js tutorials to help developers overcome their biggest obstacles and become even better, week by week.

In our recent Node.js survey, developers told us that debugging, understanding and using async programming, handling callbacks, and memory leaks are among the greatest pain points on the journey to becoming a Node Hero.

This is why we came up with the idea of a new video tutorial series called Owning Node.js.

In this three-part video series, we're going through all of these topics in a detailed way - by showing and explaining the actual coding process to you.

All of the videos are captioned, so you'll have no problem understanding what's going on - just enable the subtitles!

So, let's start Owning Node.js together!

Node.js Debugging Made Easy

In this very first video, I'm going to show you how to use the debug module, the built-in debugger, and Chrome DevTools to find and fix issues easily!

Node.js Async Programming Done Right

In the second Node.js tutorial video, I'm going to show you how you can handle asynchronous operations easily, and how you can build performant Node.js applications using them!

So, we are going to take a look at error handling with asynchronous operations, and learn how you can use the async module to handle multiple callbacks at the same time.

CPU and Memory Profiling with Node.js

In the 3rd Node.js tutorial of the series, I teach you how to create CPU profiles and memory heapdumps, and how to analyze them in the Chrome DevTools profiler. You'll learn how to detect memory leaks and bottlenecks easily.

More Node.js tutorials: Announcing the Node Hero Program

I hope these videos made things clearer! If you'd like to keep getting better, I've got good news for you!

We're launching the NODE HERO program as of today, which contains further webinars and screencasts, live-coding sessions and access to our Node.js Debugging and Monitoring solution called Trace.

I highly recommend checking it out if you'd like to become an even better Node.js developer! See you there!

Node Hero - Monitoring Node.js Applications

This article is the 13th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In the last article of the series, I’m going to show you how to do Node.js monitoring and how to find advanced issues in production environments.

The Importance of Node.js Monitoring

Getting insights into production systems is critical when you are building Node.js applications! You have an obligation to constantly detect bottlenecks and figure out what slows your product down.

An even greater issue is to handle and preempt downtimes. You must be notified as soon as they happen, preferably before your customers start to complain. Based on these needs, proper monitoring should give you at least the following features and insights into your application's behavior:

  • Profiling on a code level: You have to understand how much time it takes to run each function in a production environment, not just locally.

  • Monitoring network connections: If you are building a microservices architecture, you have to monitor network connections and lower delays in the communication between your services.

  • Performance dashboard: Knowing and constantly seeing the most important performance metrics of your application is essential to have a fast, stable production system.

  • Real-time alerting: For obvious reasons, if anything goes down, you need to get notified immediately. This means that you need tools that can integrate with Pagerduty or Opsgenie - so your DevOps team won’t miss anything important.

Server Monitoring versus Application Monitoring

Two concepts developers are often apt to confuse are monitoring servers and monitoring the applications themselves. As we tend to do a lot of virtualization, these should be treated separately, since a single server can host dozens of applications.

Let's go through the major differences!

Server Monitoring

Server monitoring is responsible for the host machine. It should be able to help you answer the following questions:

  • Does my server have enough disk space?
  • Does it have enough CPU time?
  • Does my server have enough memory?
  • Can it reach the network?

For server monitoring, you can use tools like Zabbix.

Application Monitoring

Application monitoring, on the other hand, is responsible for the health of a given application instance. It should let you know the answers to the following questions:

  • Can an instance reach the database?
  • How many requests does it handle?
  • What are the response times for the individual instances?
  • Can my application serve requests? Is it up?

For application monitoring, I recommend using our tool called Trace. What else? :)

We developed it to be an easy-to-use, efficient tool that you can use to monitor and debug applications from the moment you start building them, up to the point when you have a huge production app with hundreds of services.

How to Use Trace for Node.js Monitoring

To get started with Trace, head over to the Trace website and create your free account!

Once you've registered, follow these steps to add Trace to your Node.js applications. It only takes a minute - these are the steps you should perform:

Start Node.js monitoring with these steps

Easy, right? If everything went well, you should see that the service you connected has just started sending data to Trace:

Reporting service in Trace for Node.js Monitoring

#1: Measure your performance

As the first step of monitoring your Node.js application, I recommend heading over to the metrics page and checking out the performance of your services.

Basic Node.js performance metrics

  • You can use the response time panel to check out median and 95th percentile response data. It helps you to figure out when and why your application slows down and how it affects your users.
  • The throughput graph shows requests per minute (rpm) for status code categories (200-299, 300-399, 400-499, 500+). This way you can easily separate healthy and problematic HTTP requests within your application.
  • The memory usage graph shows how much memory your process uses. It’s quite useful for recognizing memory leaks and preempting crashes.
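For intuition, the median and 95th percentile shown on that panel can be computed from a window of raw response times with a few lines of plain JavaScript. This is a sketch using the nearest-rank method, not Trace's actual implementation:

```javascript
// Sketch: median and 95th percentile response times from a window
// of raw samples (in ms), using the nearest-rank method.
function percentile (samples, p) {
  const sorted = samples.slice().sort(function (a, b) { return a - b })
  const index = Math.min(
    sorted.length - 1,
    Math.ceil((p / 100) * sorted.length) - 1
  )
  return sorted[index]
}

const responseTimes = [12, 15, 11, 250, 14, 13, 16, 12, 400, 15]
const median = percentile(responseTimes, 50) // 14
const p95 = percentile(responseTimes, 95)    // 400
```

Note how a couple of slow outliers barely move the median but dominate the 95th percentile - which is why the panel shows both.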

Advanced Node.js Monitoring Metrics

If you’d like to see special Node.js metrics, check out the garbage collection and event loop graphs. Those can help you to hunt down memory leaks. Read our metrics documentation.
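As a rough sketch of what an event loop metric measures: schedule a timer and check how much later than requested it actually fires - the extra delay is time the event loop spent blocked. This is an illustration of the concept, not how Trace collects the metric:

```javascript
// Sketch: estimating event loop lag. A timer is asked to fire after
// intervalMs; any extra delay is time the loop spent blocked.
function measureEventLoopLag (intervalMs, callback) {
  const start = Date.now()
  setTimeout(function () {
    const lag = Date.now() - start - intervalMs
    callback(Math.max(0, lag))
  }, intervalMs)
}

measureEventLoopLag(50, function (lag) {
  console.log('event loop lag: ' + lag + 'ms')
})
```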

#2: Set up alerts

As I mentioned earlier, you need a proper alerting system in place for your production application.

Go to the alerting page of Trace and click on Create a new alert.

  • The most important thing to do here is to set up downtime and memory alerts. Trace will notify you on email / Slack / Pagerduty / Opsgenie, and you can use Webhooks as well.

  • I recommend setting up the alert we call Error rate by status code to know about HTTP requests with 4XX or 5XX status codes. These are errors you should definitely care about.

  • It can also be useful to create an alert for Response time - and get notified when your app starts to slow down.

#3: Investigate memory heapdumps

Go to the Profiler page and request a new memory heapdump, wait 5 minutes, and request another. Download them and open them on the Profiles page of Chrome DevTools. Select the second one (the most recent one) and click Comparison.

chrome heap snapshot for finding a node.js memory leak

With this view, you can easily find memory leaks in your application. I've written about this process in detail in a previous article: Hunting a Ghost - Finding a Memory Leak in Node.js.

#4: CPU profiling

Profiling on the code level is essential to understand how much time your functions take to run in the actual production environment. Luckily, Trace has this area covered too.

All you have to do is head over to the CPU Profiles tab on the Profiling page. Here you can request and download a profile, which you can load into Chrome DevTools as well.

CPU profiling in Trace

Once you've loaded it, you'll see a 10-second timeframe of your application, with all of your functions, their timings, and URLs.

With this data, you'll be able to figure out what slows down your application and deal with it!

Download the whole Node Hero series as a single pdf

The End

Update: as a sequel to Node Hero, we have started a new series called Node.js at Scale. Check it out if you are interested in more in-depth articles!

This is it.

During the 13 episodes of the Node Hero series, you learned the basics of building great applications with Node.js.

I hope you enjoyed it and improved a lot! Please share this series with your friends if you think they need it as well - and show them Trace too. It’s a great tool for Node.js development!

If you have any questions regarding Node.js monitoring, let me know in the comments section!

Node Hero - How to Deploy Node.js with Heroku or Docker

Node Hero - How to Deploy Node.js with Heroku or Docker

This article is the 12th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In this Node.js deployment tutorial, you are going to learn how to deploy Node.js applications either to a PaaS provider (Heroku) or using Docker.

Deploy Node.js to a PaaS

Platform-as-a-Service providers can be a great fit for teams who want to do zero operations or create small applications.

In this part of the tutorial, you are going to learn how to use Heroku to deploy your Node.js applications with ease.

Prerequisites for Heroku

Deploying to Heroku means pushing code to a remote git repository, so first you have to add your public SSH key to Heroku. After registration, head over to your account settings and save it there (alternatively, you can do it with the CLI).

We will also need to download and install the Heroku toolbelt. To verify that your installation was successful, run the following command in your terminal:

heroku --version  
heroku-toolbelt/3.40.11 (x86_64-darwin10.8.0) ruby/1.9.3  

Once the toolbelt is up and running, log in to use it:

heroku login  
Enter your Heroku credentials.  
Email: [email protected]  

(For more information on the toolbelt, head over to the Heroku Dev Center.)

Deploying to Heroku

Create a new app on Heroku to deploy Node.js

Click Create New App, enter a name, and select a region. In a matter of seconds, your application will be ready, and the following screen will welcome you:

Heroku Welcome Screen

Go to the Settings page of the application, and grab the Git URL. In your terminal, add the Heroku remote url:

git remote add heroku HEROKU_URL  

You are ready to deploy your first application to Heroku - it is really just a git push away:

git push heroku master  

Once you do this, Heroku starts building your application and deploys it as well. After the deployment, your service will be reachable at its Heroku URL.

Heroku Add-ons

One of the most valuable parts of Heroku is its ecosystem, since there are dozens of partners providing databases, monitoring tools, and other solutions.

To try out an add-on, install Trace, our Node.js monitoring solution. To do so, look for Add-ons on your application's page, and start typing Trace, then click on it to provision. Easy, right?

Heroku addons

(To finish the Trace integration, follow our Heroku guide.)

Deploy Node.js using Docker

In the past few years, Docker has gained massive momentum and become the go-to containerization software.

In this part of the tutorial, you are going to learn how to create images from your Node.js applications and run them.

Docker Basics

To get started with Docker, download and install it from the Docker website.

Putting a Node.js application inside Docker

First, we have to get two definitions right:

  • Dockerfile: you can think of the Dockerfile as a recipe - it includes instructions on how to create a Docker image
  • Docker image: the output of the Dockerfile run - this is the runnable unit

In order to run an application inside Docker, we have to write the Dockerfile first.

Dockerfile for Node.js

In the root folder of your project, create a Dockerfile, an empty text file, then paste the following code into it:

FROM risingstack/alpine:3.3-v4.2.6-1.1.3

COPY package.json package.json  
RUN npm install

# Add your source files
COPY . .  
CMD ["npm","start"]  

Things to notice here:

  • FROM: describes the base image used to create a new image - in this case it is from the public Docker Hub
  • COPY: this command copies the package.json file to the Docker image so that we can run npm install inside
  • RUN: this runs commands, in this case npm install
  • COPY again - note that we do the copying in two separate steps. The reason is that Docker creates layers from the command results, so if our package.json has not changed, it won't run npm install again
  • CMD: a Docker image can only have one CMD - this defines what process should be started with the image

Once you have the Dockerfile, you can create an image from it using:

docker build .  

Using private NPM modules? Check out our tutorial on how to install private NPM modules in Docker!

After successfully building your image, you can list your images with:

docker images  

To run an image:

docker run IMAGE_ID  

Congratulations! You have just run a Dockerized Node.js application locally. Time to deploy it!

Deploying Docker Images

One of the great things about Docker is that once you have a built image, you can run it everywhere - most environments will simply docker pull your image and run it.

Some providers that you can try:

  • AWS BeanStalk
  • Heroku Docker Support
  • Docker Cloud
  • Kubernetes on Google Cloud - (I highly recommend reading our article on moving to Kubernetes from our PaaS provider)

Setting them up is very straightforward - if you run into any problems, feel free to ask in the comments section!

Next up

In the next chapter of Node Hero, you are going to learn how to monitor your Node.js applications - so that they can be online 24/7.

If you have any questions or recommendations for this topic, write them in the comments section.

Node Hero - Node.js Security Tutorial

Node Hero - Node.js Security Tutorial

This article is the 11th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In this Node.js security tutorial, you are going to learn how to defend your applications against the most common attack vectors.

Node.js Security threats

Nowadays we see serious security breaches almost every week, like the LinkedIn or MySpace cases. During these attacks, huge amounts of user data were leaked - and corporate reputations were damaged.

Studies also show that security-related bug tickets stay open for an average of 18 months in some industries.

We have to fix this attitude. If you develop software, security is a part of your job.

Start the Node.js Security Tutorial

Let's get started, and secure our Node.js application by proper coding, tooling, and operation!

Secure Coding Style

Rule 1: Don't use eval

Eval can open up your application to code injection attacks. Try not to use it, but if you have to, never inject unvalidated user input into eval.

Eval is not the only construct you should avoid - in the background, each of the following expressions uses eval:

  • setInterval(String, 2)
  • setTimeout(String, 2)
  • new Function(String)

Rule 2: Always use strict mode

With 'use strict' you can opt in to a restricted "variant" of JavaScript. It eliminates some silent errors: they are thrown as exceptions instead.

'use strict'
delete Object.prototype
// TypeError

var obj = {
  a: 1,
  a: 2
}
// SyntaxError: duplicate property names are not allowed in strict mode

Rule 3: Handle errors carefully

During different error scenarios, your application may leak sensitive details about the underlying infrastructure, like: X-Powered-By:Express.

Stack traces are not treated as vulnerabilities by themselves, but they often reveal information that can be interesting to an attacker. Providing debugging information as a result of operations that generate errors is considered a bad practice. You should always log them, but never show them to the users.
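One way to follow this rule is to strictly separate what you log from what you send. The helper below is a hypothetical sketch (not from the article): the stack trace goes to your logs, while the user only ever sees a generic message:

```javascript
// Hypothetical helper: everything under `log` is for your log
// aggregator only; `response` is the only part the user ever sees.
function formatError (err) {
  return {
    log: {
      message: err.message,
      stack: err.stack
    },
    response: {
      statusCode: 500,
      body: 'Internal Server Error'
    }
  }
}

const result = formatError(new Error('connect ECONNREFUSED db:5432'))
// result.log -> full details for the logs
// result.response -> safe, generic answer for the client
```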

Rule 4: Do a static analysis of your codebase

Static analysis of your application's codebase can catch a lot of errors. For that we suggest using ESLint with the Standard code style.

Running Your Services in Production Securely

Using proper code style is not enough to efficiently secure Node.js applications - you should also be careful about how you run your services in production.

Rule 5: Don't run your processes with superuser rights

Sadly, we see this a lot: developers are running their Node.js application with superuser rights, as they want it to listen on port 80 or 443.

This is just wrong. In the case of an error/bug, your process can bring down the entire system, as it has credentials to do anything.

Instead of this, what you can do is to set up an HTTP server/proxy to forward the requests. This can be nginx or Apache. Check out our article on Operating Node.js in Production to learn more.

Rule 6: Set up the obligatory HTTP headers

There are some security-related HTTP headers that your site should set. These headers are:

  • Strict-Transport-Security enforces secure (HTTP over SSL/TLS) connections to the server
  • X-Frame-Options provides clickjacking protection
  • X-XSS-Protection enables the Cross-site scripting (XSS) filter built into most recent web browsers
  • X-Content-Type-Options prevents browsers from MIME-sniffing a response away from the declared content-type
  • Content-Security-Policy prevents a wide range of attacks, including Cross-site scripting and other cross-site injections

In Node.js it is easy to set these using the Helmet module:

var express = require('express')
var helmet = require('helmet')

var app = express()

app.use(helmet())

Helmet is available for Koa as well: koa-helmet.

Rule 7: Do proper session management

The following list of flags should be set for each cookie:

  • secure - this attribute tells the browser to only send the cookie if the request is being sent over HTTPS.
  • HttpOnly - this attribute is used to help prevent attacks such as cross-site scripting since it does not allow the cookie to be accessed via JavaScript.

Rule 8: Set cookie scope

  • domain - this attribute is used to compare against the domain of the server in which the URL is being requested. If the domain matches or if it is a sub-domain, then the path attribute will be checked next.
  • path - in addition to the domain, the URL path that the cookie is valid for can be specified. If the domain and path match, then the cookie will be sent in the request.
  • expires - this attribute is used to set persistent cookies since the cookie does not expire until the set date is exceeded.
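To make these attributes concrete, here is a hypothetical helper (not a real package) that serializes a cookie with the flags and scope attributes above into a Set-Cookie header value:

```javascript
// Hypothetical helper: serialize a cookie plus its attributes into
// a Set-Cookie header value.
function serializeCookie (name, value, attrs) {
  const parts = [name + '=' + encodeURIComponent(value)]
  if (attrs.domain) parts.push('Domain=' + attrs.domain)
  if (attrs.path) parts.push('Path=' + attrs.path)
  if (attrs.expires) parts.push('Expires=' + attrs.expires.toUTCString())
  if (attrs.secure) parts.push('Secure')
  if (attrs.httpOnly) parts.push('HttpOnly')
  return parts.join('; ')
}

const header = serializeCookie('session', 'abc123', {
  domain: 'example.com',
  path: '/',
  secure: true,
  httpOnly: true
})
// header: "session=abc123; Domain=example.com; Path=/; Secure; HttpOnly"
```

In practice you would rarely build this string yourself - which is exactly why the packages below exist.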

In Node.js you can easily create this cookie using the cookies package. Again, this is quite low-level, so you will probably end up using a wrapper, like cookie-session.

var cookieSession = require('cookie-session')
var express = require('express')

var app = express()

app.use(cookieSession({
  name: 'session',
  keys: ['key1', 'key2']
}))

app.use(function (req, res, next) {
  var n = req.session.views || 0
  req.session.views = n + 1
  res.end(n + ' views')
})

(The example is taken from the cookie-session module documentation.)

Tools to Use

Congrats, you’re almost there! If you followed this tutorial and did the previous steps thoroughly, you have just one area left to cover regarding Node.js security. Let’s dive into using the proper tools to look for module vulnerabilities!


Rule 9: Look for vulnerabilities with Retire.js

The goal of Retire.js is to help you detect the use of module versions with known vulnerabilities.

Simply install with:

npm install -g retire  

After that, running the retire command will look for vulnerabilities in your node_modules directory. (Also note that Retire.js works not only with Node modules but with front-end libraries as well.)

Rule 10: Audit your modules with the Node Security Platform CLI

nsp is the main command line interface to the Node Security Platform. It allows for auditing a package.json or npm-shrinkwrap.json file against the NSP API to check for vulnerable modules.

npm install nsp --global  
# From inside your project directory
nsp check  

Next up

Node.js security is not such a big deal after all, is it? I hope you found these rules helpful for securing your Node.js applications - and that you will follow them in the future, since security is a part of your job!

If you’d like to read more on Node.js security, I can recommend these articles to start with:

In the next chapter of Node Hero, you are going to learn how to deploy your secured Node.js application, so people can actually start using it!

If you have any questions or recommendations for this topic, write them in the comments section.

Node Hero - Debugging Node.js Applications

This article is the 10th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In this tutorial, you are going to learn how to debug your Node.js applications using the debug module, the built-in Node debugger, and Chrome's developer tools.

Bugs, debugging

The terms bug and debugging have been part of engineering jargon for many decades. One of the first written mentions of bugs is as follows:

It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise — this thing gives out and [it is] then that "Bugs" — as such little faults and difficulties are called — show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached.

Thomas Edison

Debugging Node.js Applications

One of the most frequently used approaches to finding issues in Node.js applications is the heavy use of console.log for debugging.

Let's take a look at them!

The debug module

Some of the most popular modules that you can require into your project come with the debug module. With this module, you can enable third-party modules to log to the standard output, stdout. To check whether a module is using it, take a look at the package.json file's dependency section.

To use the debug module, you have to set the DEBUG environment variable when starting your applications. You can also use the * character to wildcard names. The following line will print all the express related logs to the standard output.

DEBUG=express* node app.js  

The output will look like this:

Logfile made with the Node debug module
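Under the hood, the namespace matching is simple: a logger prints only when its name matches the DEBUG pattern. Here is a toy re-implementation of the idea - an illustration only, not the debug module's actual code:

```javascript
// Toy re-implementation of the idea behind the debug module:
// print only when the namespace matches the DEBUG pattern,
// where * is a wildcard. Returns whether the logger is enabled.
function createDebug (namespace) {
  const pattern = process.env.DEBUG || ''
  const matcher = new RegExp('^' + pattern.replace(/\*/g, '.*') + '$')
  const enabled = pattern !== '' && matcher.test(namespace)
  return function (message) {
    if (enabled) {
      console.log(namespace + ' ' + message)
    }
    return enabled
  }
}

process.env.DEBUG = 'express*'
createDebug('express:router')('registering route /users') // printed
createDebug('mongoose')('connecting')                     // silent
```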

The Built-in Node.js Debugger

Node.js includes a full-featured out-of-process debugging utility accessible via a simple TCP-based protocol and built-in debugging client.

To start the built-in debugger you have to start your application this way:

node debug app.js  

Once you have done that, you will see something like this:

using the built-in node.js debugger

Basic Usage of the Node Debugger

To navigate this interface, you can use the following commands:

  • c => continue with code execution
  • n => execute this line and go to next line
  • s => step into this function
  • o => finish function execution and step out
  • repl => allows code to be evaluated remotely

You can add breakpoints to your applications by inserting the debugger statement into your codebase.

function add (a, b) {
  debugger
  return a + b
}

var res = add('apple', 4)

It is possible to watch expression and variable values during debugging. On every breakpoint, each expression from the watchers list will be evaluated in the current context and displayed immediately before the breakpoint's source code listing.

To start using watchers, define them for the expressions you want to watch:

watch('expression')
To get a list of active watchers type watchers, to unwatch an expression use unwatch('expression').

Pro tip: you can switch running Node.js processes into debug mode by sending the SIGUSR1 signal to them. After that, you can connect the debugger with node debug -p <pid>.

To understand the full capabilities of the built-in debugger, check out the official API docs:

The Chrome Debugger

When you start debugging complex applications, something visual can help. Wouldn't it be great to use the familiar UI of Chrome DevTools for debugging Node.js applications as well?

node.js chrome developer tools debugging

Good news, the Chrome debug protocol is already ported into a Node.js module and can be used to debug Node.js applications.

To start using it, you have to install node-inspector first:

npm install -g node-inspector  

Once you've installed it, you can start debugging your applications by starting them this way:

node-debug index.js --debug-brk  

(the --debug-brk flag pauses execution on the first line)

It will open up the Chrome Developer Tools, and you can start debugging your Node.js applications with it.

Next up

Debugging is not that hard after all, is it?

In the next chapter of Node Hero, you are going to learn how to secure your Node.js applications.

If you have any questions or recommendations for this topic, write them in the comments section.

Node Hero - Node.js Unit Testing Tutorial

This article is the 9th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In this tutorial, you are going to learn what unit testing in Node.js is, and how to test your applications properly.

Testing Node.js Applications

You can think of tests as safeguards for the applications you are building. They will run not just on your local machine, but also on the CI services so that failing builds won't get pushed to production systems.

You may ask: what should I test in my application? How many tests should I have?

The answer varies across use-cases, but as a rule of thumb, you can follow the guidelines set by the test pyramid.

Test Pyramid for Node.js Unit Testing

Essentially, the test pyramid describes that you should write unit tests, integration tests and end-to-end tests as well. You should have more integration tests than end-to-end tests, and even more unit tests.

Let's take a look at how you can add unit tests for your applications!

Please note that we are not going to talk about integration tests and end-to-end tests here, as they are far beyond the scope of this tutorial.

Unit Testing Node.js Applications

We write unit tests to see if a given module (unit) works. All the dependencies are stubbed, meaning we are providing fake dependencies for a module.

You should write tests for the exposed methods, not for the internal workings of the given module.

The Anatomy of a Unit Test

Each unit test has the following structure:

  1. Test setup
  2. Calling the tested method
  3. Asserting

Each unit test should test one concern only. (Of course, this doesn't mean you can only add one assertion.)

Modules Used for Node.js Unit Testing

For unit testing, we are going to use the following modules:

  • test runner: mocha, alternatively tape
  • assertion library: chai, alternatively the assert module (for asserting)
  • test spies, stubs and mocks: sinon (for test setup).

Spies, stubs and mocks - which one and when?

Before doing some hands-on unit testing, let's take a look at what spies, stubs and mocks are!


You can use spies to get information on function calls, like how many times they were called, or what arguments were passed to them.

it('calls subscribers on publish', function () {
  var callback = sinon.spy()
  PubSub.subscribe('message', callback)

  PubSub.publishSync('message')

  assertTrue(callback.called)
})

// example taken from the sinon documentation site:

Stubs are like spies, but they replace the target function. You can use stubs to control a method's behaviour to force a code path (like throwing errors) or to prevent calls to external resources (like HTTP APIs).

it('calls all subscribers, even if there are exceptions', function () {
  var message = 'an example message'
  var stub = sinon.stub().throws()
  var spy1 = sinon.spy()
  var spy2 = sinon.spy()

  PubSub.subscribe(message, stub)
  PubSub.subscribe(message, spy1)
  PubSub.subscribe(message, spy2)

  PubSub.publishSync(message, undefined)

  assert(spy1.called)
  assert(spy2.called)
  assert(stub.calledBefore(spy1))
})

// example taken from the sinon documentation site:

A mock is a fake method with a pre-programmed behavior and expectations.

it('calls all subscribers when exceptions happen', function () {
  var myAPI = {
    method: function () {}
  }

  var spy = sinon.spy()
  var mock = sinon.mock(myAPI)
  mock.expects('method').once().throws()

  PubSub.subscribe('message', myAPI.method)
  PubSub.subscribe('message', spy)
  PubSub.publishSync('message', undefined)

  mock.verify()
  assert(spy.calledOnce)
})

// example taken from the sinon documentation site:

As you can see, for mocks you have to define the expectations upfront.

Imagine, that you'd like to test the following module:

const fs = require('fs')
const request = require('request')

function saveWebpage (url, filePath) {
  return getWebpage(url, filePath)
    .then(writeFile)
}

function getWebpage (url) {
  return new Promise(function (resolve, reject) {
    request.get(url, function (err, response, body) {
      if (err) {
        return reject(err)
      }
      resolve(body)
    })
  })
}

function writeFile (fileContent) {
  let filePath = 'page'
  return new Promise(function (resolve, reject) {
    fs.writeFile(filePath, fileContent, function (err) {
      if (err) {
        return reject(err)
      }
      resolve(filePath)
    })
  })
}

module.exports = {
  saveWebpage
}

This module does one thing: it saves a web page (based on the given URL) to a file on the local machine. To test this module we have to stub out both the fs module as well as the request module.
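Under the hood, stubbing a module method simply means temporarily swapping it for a fake and restoring the original afterwards. Here's a hand-rolled sketch of the mechanism sinon automates for us (the `fakeFs` object is a made-up stand-in for illustration, not the real fs module):

```javascript
// replace obj[method] with a fake implementation and return a restore function
function stubMethod (obj, method, fakeImpl) {
  var original = obj[method]
  obj[method] = fakeImpl
  return function restore () {
    obj[method] = original
  }
}

// a stand-in object playing the role of the fs module in this sketch
var fakeFs = {
  writeFile: function (filePath, content, cb) {
    // in real code, this would touch the disk
  }
}

var written = null
var restore = stubMethod(fakeFs, 'writeFile', function (filePath, content, cb) {
  written = { filePath: filePath, content: content } // record instead of writing
  cb(null)
})

fakeFs.writeFile('page', '<h1>title</h1>', function () {})
restore() // the original method is back in place
```

sinon's sandboxed stubs do the same replace-and-restore dance, plus the call bookkeeping spies provide.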

Before actually starting to write the unit tests for this module, at RisingStack, we usually add a test-setup.spec.js file to do basic test setup, like creating sinon sandboxes. This saves you from writing sinon.sandbox.create() and sinon.sandbox.restore() in every single test.

// test-setup.spec.js
const sinon = require('sinon')  
const chai = require('chai')

beforeEach(function () {
  this.sandbox = sinon.sandbox.create()
})

afterEach(function () {
  this.sandbox.restore()
})

Also, please note, that we always put test files next to the implementation, hence the .spec.js name. In our package.json you can find these lines:

  "test-unit": "NODE_ENV=test mocha './**/*.spec.js'",

Once we have these setups, it is time to write the tests themselves!

const fs = require('fs')  
const request = require('request')

const expect = require('chai').expect

const webpage = require('./webpage')

describe('The webpage module', function () {
  it('saves the content', function * () {
    const url = ''
    const content = '<h1>title</h1>'

    const writeFileStub = this.sandbox.stub(fs, 'writeFile', function (filePath, fileContent, cb) {
      cb(null)
    })

    const requestStub = this.sandbox.stub(request, 'get', function (url, cb) {
      cb(null, null, content)
    })

    const result = yield webpage.saveWebpage(url)

    expect(requestStub).to.be.calledOnce
    expect(writeFileStub).to.be.calledOnce
    expect(result).to.eql('page')
  })
})

The full codebase can be found here:

Code coverage

To get a better idea of how well your codebase is covered with tests, you can generate a coverage report.

This report will include metrics on:

  • line coverage,
  • statement coverage,
  • branch coverage,
  • and function coverage.

At RisingStack, we use istanbul for code coverage. You should add the following script to your package.json to use istanbul with mocha:

istanbul cover _mocha $(find ./lib -name \"*.spec.js\" -not -path \"./node_modules/*\")  

Once you do, you will get something like this:

Node.js Unit Testing Code Coverage

You can click around, and actually see your source code annotated - which part is tested, which part is not.

Download the whole Node Hero series as a single pdf

Next up

Testing can save you a lot of trouble - still, it is inevitable to also do debugging from time to time. In the next chapter of Node Hero, you are going to learn how to debug Node.js applications.

If you have any questions or recommendations for this topic, write them in the comments section.

Node Hero - Node.js Authentication using Passport.js

This is the 8th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In this tutorial, you are going to learn how to implement a local Node.js authentication strategy using Passport.js and Redis.

Technologies to use

Before jumping into the actual coding, let's take a look at the new technologies we are going to use in this chapter.

What is Passport.js?

Simple, unobtrusive authentication for Node.js -

Passport.js is an authentication middleware for Node.js

Passport is an authentication middleware for Node.js which we are going to use for session management.

What is Redis?

Redis is an open source (BSD licensed), in-memory data structure store, used as database, cache and message broker. -

We are going to store our user's session information in Redis, and not in the process's memory. This way our application will be a lot easier to scale.

The Demo Application

For demonstration purposes, let’s build an application that does only the following:

  • exposes a login form,
  • exposes two protected pages:
    • a profile page,
    • secured notes


The Project Structure

You have already learned how to structure Node.js projects in the previous chapter of Node Hero, so let's use that knowledge!

We are going to use the following structure:

├── app
|   ├── authentication
|   ├── note
|   ├── user
|   ├── index.js
|   └── layout.hbs
├── config
|   └── index.js
├── index.js
└── package.json

As you can see we will organize files and directories around features. We will have a user page, a note page, and some authentication related functionality.

(Download the full source code at

The Node.js Authentication Flow

Our goal is to implement the following authentication flow into our application:

  1. User enters username and password
  2. The application checks if they are matching
  3. If they are matching, it sends a Set-Cookie header that will be used to authenticate further pages
  4. When the user visits pages from the same domain, the previously set cookie will be added to all the requests
  5. Authenticate restricted pages with this cookie
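Step 4 of the flow relies on the browser sending the session cookie back in the `Cookie` request header. Express and Passport parse it for you, but a simplified sketch of what that parsing involves looks like this (illustration only, not the actual express-session implementation):

```javascript
// parse a raw Cookie header like "connect.sid=abc123; theme=dark" into an object
function parseCookies (header) {
  const cookies = {}
  header.split(';').forEach(function (pair) {
    const idx = pair.indexOf('=')
    if (idx === -1) return
    const name = pair.slice(0, idx).trim()
    cookies[name] = pair.slice(idx + 1).trim()
  })
  return cookies
}

const cookies = parseCookies('connect.sid=s%3Aabc123; theme=dark')
console.log(cookies['connect.sid']) // 's%3Aabc123'
console.log(cookies.theme)          // 'dark'
```

The session middleware then uses the parsed session id to look up the user's session data in Redis.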

To set up an authentication strategy like this, follow these three steps:

Step 1: Setting up Express

We are going to use Express for the server framework - you can learn more on the topic by reading our Express tutorial.

// file:app/index.js
const express = require('express')
const passport = require('passport')
const session = require('express-session')
const RedisStore = require('connect-redis')(session)

const config = require('../config')

const app = express()

app.use(session({
  store: new RedisStore({
    url: config.redisStore.url
  }),
  secret: config.redisStore.secret,
  resave: false,
  saveUninitialized: false
}))
app.use(passport.initialize())
app.use(passport.session())

What did we do here?

First of all, we required all the dependencies that the session management needs. After that we have created a new instance from the express-session module, which will store our sessions.

For the backing store, we are using Redis, but you can use any other, like MySQL or MongoDB.

Step 2: Setting up Passport for Node.js

Passport is a great example of a library using plugins. For this tutorial, we are adding the passport-local module which enables easy integration of a simple local authentication strategy using usernames and passwords.

For the sake of simplicity, in this example, we are not using a second backing store, but only an in-memory user instance. In real life applications, the findUser would look up a user in a database.

// file:app/authenticate/init.js
const passport = require('passport')
const LocalStrategy = require('passport-local').Strategy

const user = {
  username: 'test-user',
  password: 'test-password',
  id: 1
}

function findUser (username, callback) {
  if (username === user.username) {
    return callback(null, user)
  }
  return callback(null)
}

passport.use(new LocalStrategy(
  function (username, password, done) {
    findUser(username, function (err, user) {
      if (err) {
        return done(err)
      }
      if (!user) {
        return done(null, false)
      }
      if (password !== user.password) {
        return done(null, false)
      }
      return done(null, user)
    })
  }
))

Once the findUser returns with our user object the only thing left is to compare the user-fed and the real password to see if there is a match.

If it is a match, we let the user in (by returning the user to passport - return done(null, user)); if not, we return an unauthorized error (by returning false to passport - return done(null, false)).

Step 3: Adding Protected Endpoints

To add protected endpoints, we are leveraging the middleware pattern Express uses. For that, let's create the authentication middleware first:

// file:app/authentication/middleware.js
function authenticationMiddleware () {
  return function (req, res, next) {
    if (req.isAuthenticated()) {
      return next()
    }
    res.redirect('/')
  }
}

It has only one role: if the user is authenticated (has the right cookies), it simply calls the next middleware; otherwise, it redirects to the page where the user can log in.
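The middleware pattern itself is nothing Express-specific: it is a chain of functions where each one decides whether to hand control to the next. A minimal plain-JavaScript model of the idea (the names here are made up for illustration):

```javascript
// run a list of middleware functions; each one decides whether to call the next
function runChain (middlewares, req) {
  let index = 0
  function next () {
    const middleware = middlewares[index++]
    if (middleware) middleware(req, next)
  }
  next()
}

const log = []

// an "authentication" middleware: only lets authenticated requests through
function requireAuth (req, next) {
  if (req.isAuthenticated) {
    return next()
  }
  log.push('redirect to login')
}

// the protected handler, only reached when requireAuth called next()
function renderProfile (req, next) {
  log.push('profile rendered')
}

runChain([requireAuth, renderProfile], { isAuthenticated: true })
runChain([requireAuth, renderProfile], { isAuthenticated: false })

console.log(log) // [ 'profile rendered', 'redirect to login' ]
```

Express works the same way, with `res` added to the signature and error handling layered on top.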

Using it is as easy as adding a new middleware to the route definition.

// file:app/user/init.js
const passport = require('passport')

app.get('/profile', passport.authenticationMiddleware(), renderProfile)  



In this Node.js tutorial, you have learned how to add basic authentication to your application. Later on, you can extend it with different authentication strategies, like Facebook or Twitter. You can find more strategies at

The full, working example is on GitHub, you can take a look here:


Next up

The next chapter of Node Hero will be all about unit testing Node.js applications. You will learn concepts like unit testing, test pyramid, test doubles and a lot more!

Share your questions and feedbacks in the comment section.

Node Hero - Node.js Project Structure Tutorial

This is the 7th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

Most Node.js frameworks don't come with a fixed directory structure and it might be challenging to get it right from the beginning. In this tutorial, you will learn how to properly structure a Node.js project to avoid confusion when your applications start to grow.

UPDATE: We wrote another article about Node.js project structuring, which discusses advanced techniques as well.

The 5 fundamental rules of a Node.js Project Structure

There are a lot of possible ways to organize a Node.js project - and each of the known methods has its ups and downs. However, according to our experience, developers always want to achieve the same things: clean code and the possibility of adding new features with ease.

In the past years at RisingStack, we had a chance to build efficient Node applications in many sizes, and we gained numerous insights regarding the dos and don'ts of project structuring.

We have outlined five simple guiding rules which we enforce during Node.js development. If you manage to follow them, your projects will be fine:

Rule 1 - Organize your Files Around Features, Not Roles

Imagine that you have the following directory structure:

// DON'T
├── controllers
|   ├── product.js
|   └── user.js
├── models
|   ├── product.js
|   └── user.js
├── views
|   ├── product.hbs
|   └── user.hbs

The problems with this approach are:

  • to understand how the product pages work, you have to open up three different directories, with lots of context switching,
  • you end up writing long paths when requiring modules: require('../../controllers/user.js')


Instead of this, you can structure your Node.js applications around product features / pages / components. It makes understanding a lot easier:

// DO
├── product
|   ├── index.js
|   ├── product.js
|   └── product.hbs
├── user
|   ├── index.js
|   ├── user.js
|   └── user.hbs


Rule 2 - Don't Put Logic in index.js Files

Use these files only to export functionality, like:

// product/index.js
var product = require('./product')

module.exports = {
  create: product.create
}

Rule 3 - Place Your Test Files Next to The Implementation

Tests are not just for checking whether a module produces the expected output, they also document your modules (you will learn more on testing in the upcoming chapters). Because of this, it is easier to understand if test files are placed next to the implementation.


Put your additional test files in a separate test folder to avoid confusion.

├── test
|   └── setup.spec.js
├── product
|   ├── index.js
|   ├── product.js
|   ├── product.spec.js
|   └── product.hbs
├── user
|   ├── index.js
|   ├── user.js
|   ├── user.spec.js
|   └── user.hbs

Rule 4 - Use a config Directory

To place your configuration files, use a config directory.

├── config
|   ├── index.js
|   └── server.js
├── product
|   ├── index.js
|   ├── product.js
|   ├── product.spec.js
|   └── product.hbs

Rule 5 - Put Your Long npm Scripts in a scripts Directory

Create a separate scripts directory for the additional long scripts that your package.json refers to.

├── scripts
|   ├──
|   └──
├── product
|   ├── index.js
|   ├── product.js
|   ├── product.spec.js
|   └── product.hbs


Next up

In the next chapter of Node Hero, you are going to learn how to authenticate users using Passport.js. Until the next chapter comes out, feel free to ask any questions you ran into!

UPDATE: We wrote another article about Node.js project structuring, which discusses advanced techniques as well.

Node Hero - Node.js Request Module Tutorial

This is the 6th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In the following tutorial, you will learn the basics of HTTP, and how you can fetch resources from external sources using the Node.js request module.

What's HTTP?

HTTP stands for Hypertext Transfer Protocol. HTTP functions as a request–response protocol in the client–server computing model.

HTTP Status Codes

Before diving into the communication with other APIs, let's review the HTTP status codes we may encounter during the process. They describe the outcome of our requests and are essential for error handling.

  • 1xx - Informational

  • 2xx - Success: These status codes indicate that our request was received and processed correctly. The most common success codes are 200 OK, 201 Created and 204 No Content.

  • 3xx - Redirection: This group shows that the client had to do an additional action to complete the request. The most common redirection codes are 301 Moved Permanently, 304 Not Modified.

  • 4xx - Client Error: This class of status codes is used when the request sent by the client was faulty in some way. The server response usually contains the explanation of the error. The most common client error codes are 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, 409 Conflict.

  • 5xx - Server Error: These codes are sent when the server failed to fulfill a valid request due to some error. The cause may be a bug in the code or some temporary or permanent incapability. The most common server error codes are 500 Internal Server Error, 503 Service Unavailable.

If you'd like to learn more about HTTP status codes, you can find a detailed explanation about them here.
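Since the first digit alone identifies the class, a generic response handler can branch on it. A tiny helper to illustrate (not part of any library):

```javascript
// map an HTTP status code to its class based on the first digit
function statusClass (code) {
  const classes = {
    1: 'Informational',
    2: 'Success',
    3: 'Redirection',
    4: 'Client Error',
    5: 'Server Error'
  }
  return classes[Math.floor(code / 100)] || 'Unknown'
}

console.log(statusClass(200)) // 'Success'
console.log(statusClass(301)) // 'Redirection'
console.log(statusClass(404)) // 'Client Error'
console.log(statusClass(503)) // 'Server Error'
```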

Sending Requests to External APIs

Connecting to external APIs is easy in Node. You can just require the core HTTP module and start sending requests.

Of course, there are much better ways to call an external endpoint. On NPM you can find multiple modules that can make this process easier for you. For example, the two most popular ones are the request and superagent modules.

Both of these modules have an error-first callback interface that can lead to some issues (I bet you've heard about Callback-Hell), but luckily we have access to the promise-wrapped versions.


Using the Node.js Request Module

Using the request-promise module is simple. After installing it from NPM, you just have to require it:

const request = require('request-promise')  

Sending a GET request is as simple as:

const options = {
  method: 'GET',
  uri: ''
}

request(options)
  .then(function (response) {
    // Request was successful, use the response object at will
  })
  .catch(function (err) {
    // Something bad happened, handle the error
  })

If you are calling a JSON API, you may want the request-promise to parse the response automatically. In this case, just add this to the request options:

json: true  

POST requests work in a similar way:

const options = {
  method: 'POST',
  uri: '',
  body: {
    foo: 'bar'
  },
  json: true
    // JSON stringifies the body automatically
}

request(options)
  .then(function (response) {
    // Handle the response
  })
  .catch(function (err) {
    // Deal with the error
  })

To add query string parameters you just have to add the qs property to the options object:

const options = {
  method: 'GET',
  uri: '',
  qs: {
    limit: 10,
    skip: 20,
    sort: 'asc'
  }
}

This will make your request URL include the query string ?limit=10&skip=20&sort=asc.

You can also define any header, the same way we added the query parameters:

const options = {
  method: 'GET',
  uri: '',
  headers: {
    'User-Agent': 'Request-Promise',
    'Authorization': 'Basic QWxhZGRpbjpPcGVuU2VzYW1l'
  }
}
Error handling

Error handling is an essential part of making requests to external APIs, as we can never be sure what will happen to them. Apart from our client errors the server may respond with an error or just send data in a wrong or inconsistent format. Keep these in mind when you try handling the response. Also, using catch for every request is a good way to avoid the external service crashing our server.

Putting it together

As you have already learned how to spin up a Node.js HTTP server, how to render HTML pages, and how to get data from external APIs, it is time to put them together!

In this example, we are going to create a small Express application that can render the current weather conditions based on city names.

(To get your AccuWeather API key, please visit their developer site.)

const express = require('express')
const path = require('path')
const rp = require('request-promise')
const exphbs = require('express-handlebars')

const app = express()

app.engine('.hbs', exphbs({
  defaultLayout: 'main',
  extname: '.hbs',
  layoutsDir: path.join(__dirname, 'views/layouts')
}))
app.set('view engine', '.hbs')
app.set('views', path.join(__dirname, 'views'))

app.get('/:city', (req, res) => {
  rp({
    uri: '',
    qs: {
      apiKey: 'api-key'
        // Use your accuweather API key here
    },
    json: true
  })
    .then((data) => {
      res.render('index', data)
    })
    .catch((err) => {
      console.log(err)
      res.render('error')
    })
})

app.listen(3000)

The example above does the following:

  • creates an Express server
  • sets up the handlebars structure - for the .hbs file please refer to the Node.js HTTP tutorial
  • sends a request to the external API
    • if everything is ok, it renders the page
    • otherwise, it shows the error page and logs the error


Next up

In the next chapter of Node Hero you are going to learn how to structure your Node.js projects correctly.

In the meantime try out integrating with different API providers and if you run into problems or questions, don't hesitate to share them in the comment section!

Node Hero - Node.js Database Tutorial

This is the 5th post of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In the following Node.js database tutorial, I’ll show you how you can set up a Node.js application with a database, and teach you the basics of using it.

Storing data in a global variable

Serving static pages for users - as you have learned it in the previous chapter - can be suitable for landing pages, or for personal blogs. However, if you want to deliver personalized content you have to store the data somewhere.

Let’s take a simple example: user signup. You can serve customized content for individual users or make it available for them after identification only.

If a user wants to sign up for your application, you might want to create a route handler to make it possible:

const users = []'/users', function (req, res) {
  // retrieve user posted data from the body
  const user = req.body
  users.push({
    age: user.age
  })
  res.send('successfully registered')
})

This way you can store the users in a global variable, which will reside in memory for the lifetime of your application.

Using this method might be problematic for several reasons:

  • RAM is expensive,
  • memory resets each time you restart your application,
  • if you don't clean up, sometimes you'll end up running out of memory.


Storing data in a file

The next thing that might come up in your mind is to store the data in files.

If we store our user data permanently on the file system, we can avoid the previously listed problems.

This method looks like the following in practice:

const fs = require('fs')'/users', function (req, res) {
  const user = req.body
  fs.appendFile('users.txt', JSON.stringify({ name:, age: user.age }), (err) => {
    res.send('successfully registered')
  })
})

This way we won’t lose user data, not even after a server reset. This solution is also cost efficient, since buying storage is cheaper than buying RAM.

Unfortunately storing user data this way still has a couple of flaws:

  • Appending is okay, but think about updating or deleting.
  • If we're working with files, there is no easy way to access them in parallel (system-wide locks will prevent you from writing).
  • When we try to scale our application up, we cannot split files between servers (you can, but it is way beyond the level of this tutorial).

This is where real databases come into play.

You might have already heard that there are two main kinds of databases: SQL and NoSQL.


SQL

Let's start with SQL. It is a query language designed to work with relational databases. SQL has a couple of flavors depending on the product you're using, but the fundamentals are the same in each of them.

The data itself will be stored in tables, and each inserted piece will be represented as a row in the table, just like in Google Sheets, or Microsoft Excel.

Within an SQL database, you can define schemas - these schemas will provide a skeleton for the data you'll put in there. The types of the different values have to be set before you can store your data. For example, you'll have to define a table for your user data, and have to tell the database that it has a username which is a string, and age, which is an integer type.


NoSQL

On the other hand, NoSQL databases have become quite popular in the last decade. With NoSQL you don't have to define a schema, and you can store any arbitrary JSON. This is handy with JavaScript because we can turn any object into JSON pretty easily. Be careful though, because you can never guarantee that the data is consistent, and you can never know what is in the database.

Node.js and MongoDB

There is a common misconception about Node.js that we hear all the time:

"Node.js can only be used with MongoDB (which is the most popular NoSQL database)."

According to my experience, this is not true. There are drivers available for most of the databases, and they also have libraries on NPM. In my opinion, they are as straightforward and easy to use as MongoDB.

Node.js and PostgreSQL

For the sake of simplicity, we're going to use SQL in the following example. My dialect of choice is PostgreSQL.

To have PostgreSQL up and running you have to install it on your computer. If you're on a Mac, you can use homebrew to install PostgreSQL. Otherwise, if you're on Linux, you can install it with your package manager of choice.

Node.js Database Example PostgreSQL

For further information read this excellent guide on getting your first database up and running.

If you're planning to use a database browser tool, I'd recommend the command line program called psql - it's bundled with the PostgreSQL server installation. Here's a small cheat sheet that will come handy if you start using it.

If you don't like the command-line interface, you can use pgAdmin which is an open source GUI tool for PostgreSQL administration.

Note that SQL is a language on its own, we won't cover all of its features, just the simpler ones. To learn more, there are a lot of great courses online that cover all the basics on PostgreSQL.

Node.js Database Interaction

First, we have to create the database we are going to use. To do so, enter the following command in the terminal: createdb node_hero

Then we have to create the table for our users.

CREATE TABLE users (
  name VARCHAR(20),
  age SMALLINT
);

Finally, we can get back to coding. Here is how you can interact with your database via your Node.js program.

'use strict'

const pg = require('pg')
const conString = 'postgres://username:password@localhost/node_hero' // make sure to match your own database's credentials

pg.connect(conString, function (err, client, done) {
  if (err) {
    return console.error('error fetching client from pool', err)
  }
  client.query('SELECT $1::varchar AS my_first_query', ['node hero'], function (err, result) {
    done()

    if (err) {
      return console.error('error happened during query', err)
    }
    console.log(result.rows[0])
    process.exit(0)
  })
})

This was just a simple example, a 'hello world' in PostgreSQL. Notice that the first parameter is a string which is our SQL command, the second parameter is an array of values that we'd like to parameterize our query with.

It is a huge security risk to insert user input into queries as it arrives. Parameterized queries protect you from SQL injection attacks, in which the attacker tries to exploit poorly sanitized SQL queries. Always take this into consideration when building any user-facing application. To learn more, check out our Node.js Application Security checklist.
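To see why parameterized queries matter, compare what naive string concatenation produces when the input is malicious (a deliberately broken example - never build queries this way):

```javascript
// DON'T: building a query by concatenating user input
function naiveQuery (name) {
  return "SELECT * FROM users WHERE name = '" + name + "';"
}

const honest = naiveQuery('Alice')
const malicious = naiveQuery("'; DROP TABLE users; --")

console.log(honest)
// SELECT * FROM users WHERE name = 'Alice';
console.log(malicious)
// SELECT * FROM users WHERE name = ''; DROP TABLE users; --';
// the attacker's input became executable SQL - parameterized queries ($1, $2)
// send the values separately from the query text, so this cannot happen
```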

Let's continue with our previous example.'/users', function (req, res, next) {
  const user = req.body

  pg.connect(conString, function (err, client, done) {
    if (err) {
      // pass the error to the express error handler
      return next(err)
    }
    client.query('INSERT INTO users (name, age) VALUES ($1, $2);', [, user.age], function (err, result) {
      done() // this done callback signals the pg driver that the connection can be closed or returned to the connection pool

      if (err) {
        // pass the error to the express error handler
        return next(err)
      }
      res.send(200)
    })
  })
})


Achievement unlocked: the user is stored in the database! :) Now let's try retrieving them by adding a new endpoint to our application for user retrieval.

app.get('/users', function (req, res, next) {
  pg.connect(conString, function (err, client, done) {
    if (err) {
      // pass the error to the express error handler
      return next(err)
    }
    client.query('SELECT name, age FROM users;', [], function (err, result) {
      done()

      if (err) {
        // pass the error to the express error handler
        return next(err)
      }
      res.json(result.rows)
    })
  })
})



That wasn't that hard, was it?

Now you can run any complex SQL query that you can come up with within your Node.js application.

With this technique, you can store data persistently in your application, and thanks to the hard-working team of the node-postgres module, it is a piece of cake to do so.

We have gone through all the basics you have to know about using databases with Node.js. Now go, and create something yourself.

Try things out and experiment, because that's the best way of becoming a real Node Hero! Practice and be prepared for the next chapter on how to communicate with third-party APIs!

If you have any questions regarding this tutorial or about using Node.js with databases, don't hesitate to ask!