Trace Node.js Monitoring
Gergely Nemeth

Node.js and microservices, organizer of @Oneshotbudapest @nodebp @jsconfbp

Yarn vs npm - The State of Node.js Package Managers

With the v7.4 release, npm 4 became the bundled, default package manager for Node.js. In the meantime, Facebook released its own package manager solution, called Yarn.

Let's take a look at the state of Node.js package managers, what they can do for you, and when you should pick which one!

Yarn - the new kid on the block

Fast, reliable and secure dependency management - this is the promise of Yarn, the new dependency manager created by the engineers of Facebook.

But can Yarn live up to the expectations?

Yarn - the Node.js package manager

Installing Yarn

There are several ways of installing Yarn. If you have npm installed, you can just install Yarn with npm:

npm install yarn --global  

However, the Yarn team recommends installing it via your native OS package manager - if you are on a Mac, that will probably be Homebrew (brew):

brew update  
brew install yarn  

Yarn Under the Hood

Yarn has a lot of performance and security improvements under the hood. Let's see what these are!

Offline cache

When you install a package using Yarn (using yarn add packagename), it places the package on your disk. During the next install, this package will be used instead of sending an HTTP request to get the tarball from the registry.

Your cached module will be put into ~/.yarn-cache, prefixed with the registry name and postfixed with the module's version.

This means that if you install the 4.4.5 version of express with Yarn, it will be put into ~/.yarn-cache/npm-express-4.4.5.


Deterministic Installs

Yarn uses lockfiles (yarn.lock) and a deterministic install algorithm. We can say goodbye to the "but it works on my machine" bugs.

The lockfile looks something like this:

Yarn lockfile

It contains the exact version numbers of all your dependencies - just like with an npm shrinkwrap file.
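
As the lockfile screenshot is not reproduced here, the following is an illustrative excerpt of a yarn.lock entry - the resolved URL, hash, and dependency ranges are made up for the example:

express@^4.4.5:
  version "4.4.5"
  resolved "https://registry.yarnpkg.com/express/-/express-4.4.5.tgz#<hash>"
  dependencies:
    accepts "~1.2.7"
    content-type "~1.0.1"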

License checks

Yarn comes with a handy license checker, which can become really powerful in case you have to check the licenses of all the modules you depend on.

yarn licenses

Potential issues/questions

Yarn is still in its early days, so it’s no surprise that there are some questions arising when you start using it.

What’s going on with the default registry?

By default, the Yarn CLI uses a different registry than the original npm one: https://registry.yarnpkg.com. So far, there is no explanation of why it does not use the same registry.

Does Facebook have plans to make incompatible API changes and split the community?

Contributing back to npm?

One of the most logical questions that can come up when talking about Yarn is: why not talk with the CLI team at npm and work together?

If the problem is speed, I am sure all npm users would like to get those improvements as well.

As for deterministic installs: instead of coming up with a new lockfile format, the existing npm-shrinkwrap.json could have been fixed.

Why the strange versioning?

In the world of Node.js and npm, stable versions start at 1.0.0.

At the time of writing this article, Yarn is at 0.18.1.

Is something missing to make Yarn stable? Does Yarn simply not follow semver?

npm 4

npm is the default package manager we all know, and npm 4 has been bundled with each Node.js release since v7.4.

Updating npm

To start using npm version 4, you just have to update your current CLI version:

npm install npm -g  

At the time of writing this article, this command will install npm version 4.1.1, which was released on 12/11/2016. Let's see what changed in this version!

Changes since version 3

  • npm search is now reimplemented to stream results, and sorting is no longer supported,
  • npm scripts no longer prepend the path of the node executable used to run npm before running scripts,
  • prepublish has been deprecated - you should use prepare from now on,
  • npm outdated returns 1 if it finds outdated packages,
  • partial shrinkwraps are no longer supported - the npm-shrinkwrap.json is considered a complete manifest,
  • Node.js 0.10 and 0.12 are no longer supported,
  • npm doctor was added, which diagnoses the user's environment and recommends solutions for any npm-related problems it finds.

As you can see, the team at npm was quite busy as well - both npm and Yarn made great progress in the past months.

Conclusion

It is great to see a new, open-source npm client - no doubt, a lot of effort went into making Yarn great!

Hopefully, we will see the improvements of Yarn incorporated into npm as well, so users of both tools can benefit from each other's improvements.

Yarn vs. npm - Which one to pick?

If you are working on proprietary software, it does not really matter which one you use. With npm, you can use npm-shrinkwrap.json, while with Yarn you can use yarn.lock.

The team at Yarn published a great article on why lockfiles should always be committed - I recommend checking it out: https://yarnpkg.com/blog/2016/11/24/lockfiles-for-all


Node.js Interview Questions and Answers (2017 Edition)

Two years ago we published our first article on common Node.js Interview Questions and Answers. Since then a lot of things improved in the JavaScript and Node.js ecosystem, so it was time to update it.

Important Disclaimers

It is never a good practice to judge someone just by questions like these, but these can give you an overview of the person's experience in Node.js.

But obviously, these questions do not give you the big picture of someone's mindset and thinking.

I think that a real-life problem can show a lot more of a candidate's knowledge - so we encourage you to do pair programming with the developers you are going to hire.

Finally and most importantly: we are all humans, so make your hiring process as welcoming as possible. These questions are not meant to be used as "Questions & Answers" but just to drive the conversation.

Node.js Interview Questions for 2017

  • What is an error-first callback?
  • How can you avoid callback hells?
  • What are Promises?
  • What tools can be used to assure consistent style? Why is it important?
  • When should you use npm, and when Yarn?
  • What's a stub? Name a use case!
  • What's a test pyramid? Give an example!
  • What's your favorite HTTP framework and why?
  • When are background/worker processes useful? How can you handle worker tasks?
  • How can you secure your HTTP cookies against XSS attacks?
  • How can you make sure your dependencies are safe?

The Answers

What is an error-first callback?

Error-first callbacks are used to pass both errors and data. The error has to be passed as the first parameter, and it has to be checked to see if something went wrong. Additional arguments are used to pass data.

fs.readFile(filePath, function(err, data) {  
  if (err) {
    // handle the error, the return is important here
    // so execution stops here
    return console.log(err)
  }
  // use the data object
  console.log(data)
})

How can you avoid callback hells?

There are lots of ways to solve the issue of callback hells:

  • modularization: break the callbacks into independent, named functions,
  • using a control flow library, like async,
  • using generators with Promises,
  • using async/await.
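
A minimal sketch of the first technique, modularization - the file names and the transform helper are made up for the example:

const fs = require('fs')

// a hypothetical asynchronous transformation step
function transform (data, callback) {
  callback(null, data.toUpperCase())
}

// nested callbacks drift to the right with every step...
fs.readFile('input.txt', 'utf-8', (err, data) => {
  if (err) return console.error(err)
  transform(data, (err, result) => {
    if (err) return console.error(err)
    fs.writeFile('output.txt', result, (err) => {
      if (err) return console.error(err)
      console.log('done')
    })
  })
})

// ...while named functions keep the flow flat and each step testable
function onRead (err, data) {
  if (err) return console.error(err)
  transform(data, onTransform)
}

function onTransform (err, result) {
  if (err) return console.error(err)
  fs.writeFile('output.txt', result, onWrite)
}

function onWrite (err) {
  if (err) return console.error(err)
  console.log('done')
}

fs.readFile('input.txt', 'utf-8', onRead)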

What are Promises?

Promises are a concurrency primitive, first described in the 80s. Now they are part of most modern programming languages to make your life easier. Promises can help you better handle async operations.

An example can be the following snippet, which after 100ms prints out the result string to the standard output. Also, note the catch, which can be used for error handling. Promises are chainable.

new Promise((resolve, reject) => {  
  setTimeout(() => {
    resolve('result')
  }, 100)
})
  .then(console.log)
  .catch(console.error)

What tools can be used to assure consistent style? Why is it important?

When working in a team, consistent style is important, so team members can modify more projects easily, without having to get used to a new style each time.

Also, it can help eliminate programming issues using static analysis.

Tools that can help:

  • ESLint,
  • JSHint,
  • JSCS,
  • or the JavaScript Standard Style.

If you’d like to be even more confident, I suggest you to learn and embrace the JavaScript Clean Coding principles as well!


What's a stub? Name a use case!

Stubs are functions/programs that simulate the behaviors of components/modules. Stubs provide canned answers to function calls made during test cases.

An example can be writing a file, without actually doing so.

var fs = require('fs')
var sinon = require('sinon')

// stub fs.writeFile so that no file is actually written
// (in sinon 3+, use: sinon.stub(fs, 'writeFile').callsFake(fn))
var writeFileStub = sinon.stub(fs, 'writeFile', function (path, data, cb) {  
  return cb(null)
})

// the assertion assumes chai's expect with the sinon-chai plugin
expect(writeFileStub).to.be.called  
writeFileStub.restore()  

What's a test pyramid? Give an example!

A test pyramid describes the ratio of how many unit tests, integration tests, and end-to-end tests you should write.

An example for an HTTP API may look like this:

  • lots of low-level unit tests for models (dependencies are stubbed),
  • fewer integration tests, where you check how your models interact with each other (dependencies are not stubbed),
  • even fewer end-to-end tests, where you call your actual endpoints (dependencies are not stubbed).

What's your favorite HTTP framework and why?

There is no right answer for this. The goal here is to understand how deeply the candidate knows the framework she/he uses, and whether she/he can explain the pros and cons of picking it.

When are background/worker processes useful? How can you handle worker tasks?

Worker processes are extremely useful if you'd like to do data processing in the background, like sending out emails or processing images.

There are lots of options for this like RabbitMQ or Kafka.

How can you secure your HTTP cookies against XSS attacks?

XSS occurs when the attacker injects executable JavaScript code into the HTML response.

To mitigate these attacks, you have to set flags on the Set-Cookie HTTP header:

  • HttpOnly - this attribute is used to help prevent attacks such as cross-site scripting since it does not allow the cookie to be accessed via JavaScript.
  • secure - this attribute tells the browser to only send the cookie if the request is being sent over HTTPS.

So it would look something like this: Set-Cookie: sid=<cookie-value>; HttpOnly. If you are using Express with cookie-session, this is the default behavior.
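
A minimal sketch of setting these flags in Express with the cookie-session module - the session name and signing key are illustrative:

const express = require('express')
const cookieSession = require('cookie-session')

const app = express()

app.use(cookieSession({
  name: 'session',
  keys: ['a-secret-signing-key'], // used to sign the cookie
  httpOnly: true,                 // not accessible from client-side JavaScript
  secure: true                    // only sent over HTTPS
}))

app.listen(3000)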

How can you make sure your dependencies are safe?

When writing Node.js applications, ending up with hundreds or even thousands of dependencies can easily happen.
For example, if you depend on Express, you depend on 27 other modules directly, and of course on those modules' dependencies as well, so manually checking all of them is not an option!

The only option is to automate the update / security audit of your dependencies. For that, there are free and paid options, such as the Node Security Platform CLI (nsp), Snyk, or Greenkeeper for automated dependency updates.
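
For example, a quick check with the nsp CLI - one of the free options - looks like this:

$ npm install nsp --global
$ nsp check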

Node.js Interview Puzzles

The following part of the article is useful if you’d like to prepare for an interview that involves puzzles, or tricky questions.

What's wrong with the code snippet?

new Promise((resolve, reject) => {  
  throw new Error('error')
}).then(console.log)

The Solution

There is no catch after the then, so the error will be a silent one: there will be no indication that an error was thrown.

To fix it, you can do the following:

new Promise((resolve, reject) => {  
  throw new Error('error')
}).then(console.log).catch(console.error)

If you have to debug a huge codebase, and you don't know which Promise can potentially hide an issue, you can use the unhandledRejection hook. It will print out all unhandled Promise rejections.

process.on('unhandledRejection', (err) => {  
  console.log(err)
})

What's wrong with the following code snippet?

function checkApiKey (apiKeyFromDb, apiKeyReceived) {  
  if (apiKeyFromDb === apiKeyReceived) {
    return true
  }
  return false
}

The Solution

When you compare security credentials it is crucial that you don't leak any information, so you have to make sure that you compare them in fixed time. If you fail to do so, your application will be vulnerable to timing attacks.

But why does it work like that?

V8, the JavaScript engine used by Node.js, tries to optimize the code you run from a performance point of view. It compares the strings character by character, and once a mismatch is found, it stops the comparison operation. This means that the more characters the attacker gets right, the longer the comparison takes - leaking information about the correct value.

To solve this issue, you can use the npm module called cryptiles.

const cryptiles = require('cryptiles')

function checkApiKey (apiKeyFromDb, apiKeyReceived) {  
  return cryptiles.fixedTimeComparison(apiKeyFromDb, apiKeyReceived)
}

What's the output of following code snippet?

Promise.resolve(1)  
  .then((x) => x + 1)
  .then((x) => { throw new Error('My Error') })
  .catch(() => 1)
  .then((x) => x + 1)
  .then((x) => console.log(x))
  .catch(console.error)

The Answer

The short answer is 2 - however with this question I'd recommend asking the candidates to explain what will happen line-by-line to understand how they think. It should be something like this:

  1. A new Promise is created, that will resolve to 1.
  2. The resolved value is incremented with 1 (so it is 2 now), and returned instantly.
  3. The resolved value is discarded, and an error is thrown.
  4. The error is discarded, and a new value (1) is returned.
  5. Execution does not stop after the catch: since the exception was handled, the chain continues, and a new, incremented value (2) is returned.
  6. The value is printed to the standard output.
  7. This line won't run, as there was no exception.

A day may work better than questions

Spending at least half a day with your possible next hire is worth more than a thousand of these questions.

Once you do that, you will better understand if the candidate is a good cultural fit for the company and has the right skill set for the job.

Do you miss anything? Let us know!

What was the craziest interview question you had to answer? What's your favorite question / puzzle to ask? Let us know in the comments! :)


Node.js Best Practices - How to Become a Better Developer in 2017

A year ago we wrote a post on How to Become a Better Node.js Developer in 2016 which was a huge success - so we thought now it is time to revisit the topics and prepare for 2017!

In this article, we will go through the most important Node.js best practices for 2017, topics that you should care about and educate yourself in. Let’s start!

Node.js Best Practices for 2017

Use ES2015

Last year we advised you to use ES2015 - however, a lot has changed since.

Back then, Node.js v4 was the LTS version, and it had support for 57% of the ES2015 functionality. A year passed and ES2015 support grew to 99% with Node v6.

If you are on the latest Node.js LTS version, you don't need Babel anymore to use the whole feature set of ES2015. That said, you will probably still need it on the client side!

For more information on which Node.js version supports which ES2015 features, I'd recommend checking out node.green.

Use Promises

Promises are a concurrency primitive, first described in the 80s. Now they are part of most modern programming languages to make your life easier.

Imagine the following example code that reads a file, parses it, and prints the name of the package. Using callbacks, it would look something like this:

fs.readFile('./package.json', 'utf-8', function (err, data) {  
  if (err) {
    return console.log(err)
  }

  var pkg
  try {
    // the file contents arrive as a string - parse it into an object
    pkg = JSON.parse(data)
  } catch (ex) {
    return console.log(ex)
  }
  console.log(pkg.name)
})

Wouldn't it be nice to rewrite the snippet into something more readable? Promises help you with that:

fs.readFileAsync('./package.json').then(JSON.parse).then((data) => {  
  console.log(data.name)
})
.catch((e) => {
  console.error('error reading/parsing file', e)
})

Of course, for now, the fs API does not have a readFileAsync method that returns a Promise. To make it work, you have to wrap the fs module with a promisification helper, such as Bluebird's promisifyAll.
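
A minimal sketch of that wrapping, assuming the bluebird module is installed:

const Promise = require('bluebird')

// creates promise-returning *Async variants of every fs method
const fs = Promise.promisifyAll(require('fs'))

fs.readFileAsync('./package.json', 'utf-8')
  .then(JSON.parse)
  .then((data) => console.log(data.name))
  .catch((e) => console.error('error reading/parsing file', e))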

Use the JavaScript Standard Style

When it comes to code style, it is crucial to have a company-wide standard, so when you have to change projects, you can be productive starting from day zero, without having to worry about breaking the build because of different presets.

At RisingStack we have incorporated the JavaScript Standard Style in all of our projects.

Node.js best practices - The Standard JS Logo

With Standard, there are no decisions to make and no .eslintrc, .jshintrc, or .jscsrc files to manage. It just works. You can find the Standard rules here.





Use Docker - Containers are Production Ready in 2017!

You can think of Docker images as deployment artifacts - Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server.

But why should you start using Docker?

  • it enables you to run your applications in isolation,
  • as a consequence, it makes your deployments more secure,
  • Docker images are lightweight,
  • they enable immutable deployments,
  • and with them, you can mirror production environments locally.

To get started with Docker, head over to the official getting started tutorial. Also, for orchestration we recommend checking out our Kubernetes best practices article.
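
To make this concrete, here is a minimal example Dockerfile for a Node.js service - the base image tag and file names are illustrative:

FROM node:6

WORKDIR /usr/src/app

# install dependencies first to make better use of the Docker layer cache
COPY package.json .
RUN npm install --production

COPY . .

CMD ["node", "server.js"]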

Monitor your Applications

If something breaks in your Node.js application, you should be the first one to know about it, not your customers.

One of the newer open-source solutions that can help you achieve this is Prometheus. Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. The only downside of Prometheus is that you have to set it up and host it yourself.
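
As a minimal sketch - assuming the prom-client npm package, with an illustrative metric name - exposing a metric for Prometheus to scrape could look like this:

const http = require('http')
const client = require('prom-client')

// a counter incremented on every request
const requestCounter = new client.Counter({
  name: 'http_requests_total',
  help: 'Total number of HTTP requests received'
})

http.createServer((req, res) => {
  if (req.url === '/metrics') {
    // expose all registered metrics in the Prometheus text format
    // (metrics() returns a Promise in recent prom-client versions)
    return Promise.resolve(client.register.metrics())
      .then((metrics) => res.end(metrics))
  }
  requestCounter.inc()
  res.end('Hello Prometheus')
}).listen(3000)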

If you are looking for an out-of-the-box solution with support, Trace by RisingStack is a great option developed by us.

Trace will help you with

  • alerting,
  • memory and CPU profiling in production systems,
  • distributed tracing and error searching,
  • performance monitoring,
  • and keeping your npm packages secure!

Node.js Best Practices for 2017 - Use Trace and Profling

Use Messaging for Background Processes

If you are using HTTP for sending messages, then whenever the receiving party is down, all your messages are lost. However, if you pick a persistent transport layer, like a message queue to send messages, you won't have this problem.

If the receiving service is down, the messages will be kept, and can be processed later. If the service is not down, but there is an issue, processing can be retried, so no data gets lost.

An example: you'd like to send out thousands of emails. In this case, you would just have to put some basic information into the queue - like the target email address and the first name - and a background worker could easily put together the email's content and send those emails out.

What's really great about this approach is that you can scale it whenever you want, and no traffic will be lost. If you see that there are millions of emails to be sent out, you can add extra workers, and they can consume the very same queue.

You have lots of options for messaging queues, for example RabbitMQ, Kafka, NSQ, or AWS SQS.
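
A minimal producer/worker sketch using RabbitMQ through the amqplib package - the queue name, connection URL, and job payload are made up for the example:

const amqp = require('amqplib')

// producer: put a job on the queue
amqp.connect('amqp://localhost')
  .then((connection) => connection.createChannel())
  .then((channel) => {
    return channel.assertQueue('emails').then(() => {
      // only the minimal data the worker needs goes into the queue
      const job = { email: 'john@example.com', firstName: 'John' }
      channel.sendToQueue('emails', Buffer.from(JSON.stringify(job)))
    })
  })
  .catch(console.error)

// worker (usually a separate process): consume jobs from the same queue
amqp.connect('amqp://localhost')
  .then((connection) => connection.createChannel())
  .then((channel) => {
    return channel.assertQueue('emails').then(() => {
      return channel.consume('emails', (msg) => {
        const job = JSON.parse(msg.content.toString())
        console.log('sending email to', job.email)
        channel.ack(msg) // remove the message from the queue once processed
      })
    })
  })
  .catch(console.error)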

Use the Latest LTS Node.js version

To get the best of the two worlds (stability and new features) we recommend using the latest LTS (long-term support) version of Node.js. As of writing this article, it is version 6.9.2.

To easily switch between Node.js versions, you can use nvm. Once you have installed it, switching to the LTS version takes only two commands:

nvm install 6.9.2  
nvm use 6.9.2  


Use Semantic Versioning

We conducted a Node.js Developer Survey a few months ago, which allowed us to get some insights on how people use semantic versioning.

Unfortunately, we found out that only 71% of our respondents use semantic versioning when publishing or consuming modules. This number should be higher in our opinion - everyone should use it! Why? Because updating packages without semver can easily break Node.js apps.

Node.js Best Practices for 2017 - Semantic versioning survey results

Versioning your application / modules is critical - your consumers must know if a new version of a module is published and what needs to be done on their side to get the new version.

This is where semantic versioning comes into the picture. Given a version number MAJOR.MINOR.PATCH, increment the:

  • MAJOR version when you make incompatible API changes,
  • MINOR version when you add functionality (without breaking the API), and
  • PATCH version when you make backwards-compatible bug fixes.

npm also uses SemVer when installing your dependencies, so when you publish modules, always make sure to respect it. Otherwise, you can break other developers' applications!
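
To see how npm interprets these ranges, here is a small sketch using the semver npm package - the same module npm itself relies on:

const semver = require('semver')

// ^4.4.5 allows minor and patch updates, but no major bump
console.log(semver.satisfies('4.5.0', '^4.4.5')) // true
console.log(semver.satisfies('5.0.0', '^4.4.5')) // false

// ~4.4.5 only allows patch updates
console.log(semver.satisfies('4.4.9', '~4.4.5')) // true
console.log(semver.satisfies('4.5.0', '~4.4.5')) // false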

Secure Your Applications

Securing your users' and customers' data should be one of your top priorities in 2017. In 2016 alone, hundreds of millions of user accounts were compromised as a result of weak security.

To get started with Node.js Security, read our Node.js Security Checklist, which covers topics like:

  • Security HTTP Headers,
  • Brute Force Protection,
  • Session Management,
  • Insecure Dependencies,
  • or Data Validation.
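
For the first item on that list, a minimal sketch using the popular helmet middleware for Express - assumed here as one common way to set security HTTP headers:

const express = require('express')
const helmet = require('helmet')

const app = express()

// sets headers like X-Frame-Options and Strict-Transport-Security
app.use(helmet())

app.listen(3000)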

After you’ve embraced the basics, check out my Node Interactive talk on Surviving Web Security with Node.js!

Learn Serverless

Serverless started with the introduction of AWS Lambda. Since then, it has been growing fast, with a blooming open-source community.

In the next years, serverless will become a major factor for building new applications. If you'd like to stay on the edge, you should start learning it today.

One of the most popular solutions is the Serverless Framework, which helps in deploying AWS Lambda functions.

Attend and Speak at Conferences and Meetups

Attending conferences and meetups is a great way to learn about new trends, use cases, and best practices. They are also great forums to meet new people.

To take it one step further, I'd like to encourage you to speak at one of these events as well!

As public speaking is tough, and “imagine everyone's naked” is the worst advice, I'd recommend checking out speaking.io for tips on public speaking!

Become a better Node.js developer in 2017

As 2017 will be the year of Node.js, we'd like to help you get the most out of it!

We just launched a new study program called "Owning Node.js" which helps you to become confident in:

  • Async Programming with Node.js
  • Creating servers with Express
  • Using Databases with Node
  • Project Structuring and building scalable apps



If you have any questions about the article, find me in the comments section!

Node.js Tutorial Videos: Debugging, Async, Memory Leaks, CPU Profiling

At RisingStack, we're continuously working on delivering Node.js tutorials to help developers overcome their biggest obstacles, and become even better, week-by-week.

In our recent Node.js survey we've been told that Debugging, understanding/using Async programming, handling callbacks and memory leaks are amongst the greatest pain-points one would face on her/his journey to become a Node Hero.

This is why we came up with the idea of a new video tutorial series called Owning Node.js.

In this three-part video series, we're going through all of these topics in a detailed way - by showing and explaining the actual coding process to you.

All of the videos are captioned, so you'll have no problem with understanding what's going on by enabling the subtitles!

So, let's start Owning Node.js together!


Node.js Debugging Made Easy

In this very first video, I'm going to show you how to use the debug module, the built-in debugger, and Chrome DevTools to find and fix issues easily!


Node.js Async Programming Done Right

In the second Node.js tutorial video, I'm going to show you how you can handle asynchronous operations easily, and how you can build performant Node.js applications using them!


So, we are going to take a look at error handling with asynchronous operations, and learn how you can use the async module to handle multiple callbacks at the same time.


CPU and Memory Profiling with Node.js

In the 3rd Node.js tutorial of the series, I teach you how to create CPU profiles and memory heapdumps, and how to analyze them in the Chrome DevTools profiler. You'll learn how to detect memory leaks and bottlenecks easily.


More Node.js tutorials: Announcing the Node Hero Program

I hope these videos made things clearer! If you'd like to keep getting better, I've got good news for you!

We're launching the NODE HERO program as of today, which contains further webinars and screencasts, live-coding sessions and access to our Node.js Debugging and Monitoring solution called Trace.

I highly recommend checking it out if you'd like to become an even better Node.js developer! See you there!


Node.js Garbage Collection Explained - Node.js at Scale

In this article, you are going to learn how Node.js garbage collection works, what happens in the background when you write code and how memory is freed up for you.

Ancient garbage collector in action

With Node.js at Scale we are creating a collection of articles focusing on the needs of companies with bigger Node.js installations, and developers who already learned the basics of Node.

Memory Management in Node.js Applications

Every application needs memory to work properly. Memory management provides ways to dynamically allocate memory chunks for programs when they request it, and free them when they are no longer needed - so that they can be reused.

Application-level memory management can be manual or automatic. The automatic memory management usually involves a garbage collector.

The following code snippet shows how memory can be allocated in C, using manual memory management:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main() {

   char name[20];
   char *description;

   strcpy(name, "RisingStack");

   // memory allocation - 40 bytes is enough for the string below,
   // including its terminating null character
   description = malloc( 40 * sizeof(char) );

   if( description == NULL ) {
      fprintf(stderr, "Error - unable to allocate required memory\n");
   } else {
      strcpy( description, "Trace by RisingStack is an APM.");
   }

   printf("Company name = %s\n", name );
   printf("Description: %s\n", description );

   // release memory
   free(description);

   return 0;
}

In manual memory management, it is the responsibility of the developer to free up the unused memory portions. Managing your memory this way can introduce several major bugs to your applications:

  • Memory leaks when the used memory space is never freed up.
  • Wild/dangling pointers appear when an object is deleted, but the pointer is reused. Serious security issues can be introduced when other data structures are overwritten or sensitive information is read.

Luckily for you, Node.js comes with a garbage collector, and you don't need to manually manage memory allocation.


The Concept of the Garbage Collector

Garbage collection is a way of managing application memory automatically. The job of the garbage collector (GC) is to reclaim memory occupied by unused objects (garbage). It was first used in LISP in 1959, invented by John McCarthy.

The way the GC knows that objects are no longer in use is that no other objects have references to them.

"A garbage collector was first used in LISP in 1959, invented by John McCarthy." via @RisingStack

Click To Tweet

Memory before the garbage collection

The following diagram shows how the memory can look if you have objects with references to each other, along with some objects that have no reference to any object. These are the objects that can be collected by a garbage collector run.

Memory state before Node.js garbage collection

Memory after the garbage collection

Once the garbage collector runs, the objects that are unreachable get deleted, and the memory space is freed up.

Memory state after Node.js garbage collection

The Advantages of Using a Garbage Collector

  • it prevents wild/dangling pointers bugs,
  • it won't try to free up space that was already freed up,
  • it will protect you from some types of memory leaks.

Of course, using a garbage collector doesn't solve all of your problems, and it’s not a silver bullet for memory management. Let's take a look at things that you should keep in mind!

"Using a garbage collector doesn't solve all of your memory management problems with #nodejs!" via @RisingStack

Click To Tweet


Things to Keep in Mind When Using a Garbage Collector

  • performance impact - in order to decide what can be freed up, the GC consumes computing power,
  • unpredictable stalls - modern GC implementations try to avoid "stop-the-world" collections, but they can still pause your application briefly.

Node.js Garbage Collection & Memory Management in Practice

The easiest way of learning is by doing - so I am going to show you what happens in the memory with different code snippets.

The Stack

The stack contains local variables and pointers to objects on the heap or pointers defining the control flow of the application.

In the following example, both a and b will be placed on the stack.

function add (a, b) {  
  return a + b
}

add(4, 5)  





The Heap

The heap is dedicated to storing reference types, like objects or strings.

The Car object created in the following snippet is placed on the heap.

function Car (opts) {  
  this.name = opts.name
}

const LightningMcQueen = new Car({name: 'Lightning McQueen'})  

After this, the memory would look something like this:

Node.js Garbage Collection First Step - Object Placed in the Memory Heap

Let's add more cars, and see how our memory would look!

function Car (opts) {  
  this.name = opts.name
}

const LightningMcQueen = new Car({name: 'Lightning McQueen'})  
const SallyCarrera = new Car({name: 'Sally Carrera'})  
const Mater = new Car({name: 'Mater'})  

Node.js Garbage Collection Second Step - More elements added to the heap

If the GC ran now, nothing could be freed up, as the root has a reference to every object.

Let's make it a little bit more interesting, and add some parts to our cars!

function Engine (power) {  
  this.power = power
}

function Car (opts) {  
  this.name = opts.name
  this.engine = new Engine(opts.power)
}

let LightningMcQueen = new Car({name: 'Lightning McQueen', power: 900})  
let SallyCarrera = new Car({name: 'Sally Carrera', power: 500})  
let Mater = new Car({name: 'Mater', power: 100})  

Node.js Garbage Collection - Assigning values to the objects in the heap

What would happen, if we no longer use Mater, but redefine it and assign some other value, like Mater = undefined?

Node.js Garbage Collection - Redefining values

As a result, the original Mater object cannot be reached from the root object, so on the next garbage collector run it will be freed up:

Node.js Garbage Collection - Freeing up the unreachable object

Now that we understand the expected behavior of the garbage collector, let's take a look at how it is implemented in V8!

Garbage Collection Methods

In one of our previous articles we dealt with how the Node.js garbage collection methods work, so I strongly recommend reading that article.

Here are the most important things you’ll learn there:

New Space and Old Space

The heap has two main segments, the New Space and the Old Space. The New Space is where new allocations are happening; it is fast to collect garbage here and has a size of ~1-8MBs. Objects living in the New Space are called Young Generation.

The Old Space is where the objects that survived the collector in the New Space are promoted - they are called the Old Generation. Allocation in the Old Space is fast, but collection is expensive, so it is performed infrequently.

Young Generation

Usually, ~20% of the Young Generation survives into the Old Generation. Collection in the Old Space will only commence once it is nearly exhausted. To do so, the V8 engine uses two different collection algorithms.

Scavenge and Mark-Sweep collection

Scavenge collection is fast and runs on the Young Generation, while the slower Mark-Sweep collection runs on the Old Generation.

A Real-Life Example - The Meteor Case-Study

In 2013, the creators of Meteor announced their findings about a memory leak they ran into. The problematic code snippet was the following:

var theThing = null  
var replaceThing = function () {  
  var originalThing = theThing
  var unused = function () {
    if (originalThing)
      console.log("hi")
  }
  theThing = {
    longStr: new Array(1000000).join('*'),
    someMethod: function () {
      console.log(someMessage)
    }
  };
};
setInterval(replaceThing, 1000)  

Well, the typical way that closures are implemented is that every function object has a link to a dictionary-style object representing its lexical scope. If both functions defined inside replaceThing actually used originalThing, it would be important that they both get the same object, even if originalThing gets assigned to over and over, so both functions share the same lexical environment. Now, Chrome's V8 JavaScript engine is apparently smart enough to keep variables out of the lexical environment if they aren't used by any closures - from the Meteor blog.
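
The fix described in the Meteor write-up is to break that shared reference at the end of replaceThing, so the large string from the previous iteration can be collected:

var theThing = null  
var replaceThing = function () {  
  var originalThing = theThing
  var unused = function () {
    if (originalThing)
      console.log("hi")
  }
  theThing = {
    longStr: new Array(1000000).join('*'),
    someMethod: function () {
      console.log(someMessage)
    }
  }
  // explicitly clear the reference held by the shared lexical scope
  originalThing = null
}
setInterval(replaceThing, 1000)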

Further reading:


Next up

In the next chapter of the Node.js at Scale tutorial series, we will take a deep dive into writing native Node.js modules.

In the meantime, let us know in the comments section if you have any questions!


Experimenting With async/await in Node.js 7 Nightly

A couple of months ago async/await landed in V8, the JavaScript engine. In the meantime, V8 was updated multiple times in Node.js, and the latest nightly build finally added the V8 version that supports the async/await functionality to Node.js.

Disclaimer: the async/await functionality is only available in the nightly, unstable version of Node.js. Do not use it in production for now.

What's async/await?

First, let's see how you are doing async operations with Promises! This little example shows you how you can fetch data using the Fetch API and Promises.

function getTrace () {  
  return fetch('https://trace.risingstack.com', {
    method: 'get'
  })
}

getTrace()  
  .then()
  .catch()

With async/await, you can await Promises. This halts execution in a non-blocking way, since the async function waits for the result before returning it. If the Promise is not resolved but rejected, the rejected value is thrown, meaning it can be caught with a try/catch block.

The previous example rewritten with async/await would look something like this:

async function getTrace () {  
  let pageContent
  try {
    pageContent = await fetch('https://trace.risingstack.com', {
      method: 'get'
    })
  } catch (ex) {
    console.error(ex)
  }

  return pageContent
}

getTrace()  
  .then()

For more information on async/await, I recommend reading the following resources:


Using async/await without transpilers

Installing Node 7

To get started, you have to get the latest build of Node.js first. To do so, head over to the Nightly builds page and grab the latest v7 build.

Once you have downloaded it, unpack it - and you are ready to use it!


If you are using nvm, you can try to install it this way:

NVM_NODEJS_ORG_MIRROR=https://nodejs.org/download/nightly  
nvm install 7  
nvm use 7  

Running files with async/await

Let's create a simple JavaScript file that delays the execution of a function using setTimeout, wrapped in a Promise so that it can be awaited.

// app.js
const timeout = function (delay) {  
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      resolve()
    }, delay)
  })
}

async function timer () {  
  console.log('timer started')
  // timeout returns a Promise, so it can be awaited directly
  await timeout(100)
  console.log('timer finished')
}

timer()  

Once you have this file, you could try running with:

node app.js  

However, it won't work. The async/await support is still behind a flag. To run it, you have to use:

node --harmony-async-await app.js  

Building a web server with async/await

As of Koa v2, Koa supports async functions as middleware. Previously, this was only possible with transpilers, but that is no longer the case!

You can simply pass an async function as a Koa middleware:

// app.js
const Koa = require('koa')  
const app = new Koa()

app.use(async (ctx, next) => {  
  const start = new Date()
  await next()
  const ms = new Date() - start
  console.log(`${ctx.method} ${ctx.url} - ${ms}ms`)
})

app.use(ctx => {  
  ctx.body = 'Hello Koa'
})

app.listen(3000)  

Once you have a working server written using Koa, you can simply start it with:

node --harmony-async-await app.js  

When to start using it?

Node.js v8, the next stable version containing the V8 release that enables async/await, will ship in April 2017. Until then, you can experiment with async/await using the unstable Node.js v7 branch.

"#nodejs 8 will enable JavaScript V8 async/await operations. It will be released in April 2017." via @RisingStack

Click To Tweet


npm Best Practices - Node.js at Scale

Node Hero was a Node.js tutorial series focusing on teaching the most essential Node.js best practices, so one can start developing applications using it.

With our new series, called Node.js at Scale, we are creating a collection of articles focusing on the needs of companies with bigger Node.js installations, and developers who already learned the basics of Node.

In the first chapter of Node.js at Scale you are going to learn the best practices on using npm as well as tips and tricks that can save you a lot of time on a daily basis.

npm Best Practices

npm install is the most common way of using the npm cli - but it has a lot more to offer! In this chapter of Node.js at Scale you will learn how npm can help you during the full lifecycle of your application - from starting a new project through development and deployment.

#0 Know your npm

Before diving into the topics, let's see some commands that help you with what version of npm you are running, or what commands are available.

npm versions

To get the version of the npm cli you are actively using, you can do the following:

$ npm --version
2.13.2  

npm can return a lot more than just its own version - it can return the version of the current package, the Node.js version you are using and OpenSSL or V8 versions:

$ npm version
{ bleak: '1.0.4',
  npm: '2.15.0',
  ares: '1.10.1-DEV',
  http_parser: '2.5.2',
  icu: '56.1',
  modules: '46',
  node: '4.4.2',
  openssl: '1.0.2g',
  uv: '1.8.0',
  v8: '4.5.103.35',
  zlib: '1.2.8' }

npm help

Like most CLI toolkits, npm has great built-in help functionality as well. Descriptions and synopses are always available - these are essentially man-pages.

$ npm help test
NAME  
       npm-test - Test a package

SYNOPSIS  
           npm test [-- <args>]

           aliases: t, tst

DESCRIPTION  
       This runs a package's "test" script, if one was provided.

       To run tests as a condition of installation, set the npat config to true.

"9 npm best practices - a must-read collection for #nodejs developers" via @RisingStack

Click To Tweet

#1 Start new projects with npm init

When starting a new project, npm init can help you a lot by interactively creating a package.json file. It will prompt with questions, for example about the project's name or description. However, there is a quicker solution!

$ npm init --yes

If you use npm init --yes, it won't prompt for anything, just create a package.json with your defaults. To set these defaults, you can use the following commands:

npm config set init.author.name YOUR_NAME  
npm config set init.author.email YOUR_EMAIL  


#2 Finding npm packages

Finding the right packages can be quite challenging - there are hundreds of thousands of modules to choose from. We know this from experience, and developers participating in our latest Node.js survey also told us that selecting the right npm package is frustrating. Let's try to pick a module that helps us send HTTP requests!

One website that makes this task a lot easier is npms.io. It shows metrics like quality, popularity, and maintenance. These are calculated based on whether a module has outdated dependencies, whether it has linters configured, whether it is covered with tests, and when the most recent commit was made.

finding npm packages

#3 Investigate npm packages

Once we picked our module (which will be the request module in our example), we should take a look at the documentation, and check out the open issues to get a better picture of what we are going to require into our application. Don’t forget that the more npm packages you use, the higher the risk of having a vulnerable or malicious one. If you’d like to read more on npm-related security risks, read our related guideline.

If you'd like to open the homepage of the module from the cli you can do:

$ npm home request

To check open issues or the publicly available roadmap (if there’s any), you can try this:

$ npm bugs request

Alternatively, if you'd just like to check a module's git repository, type this:

$ npm repo request

#4 Saving dependencies

Once you have found the package you want to include in your project, you have to install and save it. The most common way of doing that is by using npm install request.

If you'd like to take that one step forward and automatically add it to your package.json file, you can do:

$ npm install request --save

npm saves your dependencies with the ^ prefix by default. This means that during the next npm install, the latest version without a major version bump will be installed. To change this behaviour, you can run:

$ npm config set save-prefix='~'

In case you'd like to save the exact version, you can try:

$ npm config set save-exact true

#5 Lock down dependencies

Even if you save modules with exact version numbers as shown in the previous section, you should be aware that most npm module authors don't. That's totally fine - they do it to get patches and features automatically.

The situation can easily become problematic for production deployments: it's possible to have different versions locally than on production if someone has released a new version in the meantime. The problem arises when this new version has a bug that affects your production system.

To solve this issue, you may want to use npm shrinkwrap. It will generate an npm-shrinkwrap.json that contains not just the exact versions of the modules installed on your machine, but also the versions of their dependencies, and so on. Once you have this file in place, npm install will use it to reproduce the same dependency tree.
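
For illustration, a truncated npm-shrinkwrap.json could look something like this - the module versions and URLs are examples only:

{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "request": {
      "version": "2.79.0",
      "from": "request@^2.79.0",
      "resolved": "https://registry.npmjs.org/request/-/request-2.79.0.tgz",
      "dependencies": {
        "qs": {
          "version": "6.3.0",
          "from": "qs@~6.3.0",
          "resolved": "https://registry.npmjs.org/qs/-/qs-6.3.0.tgz"
        }
      }
    }
  }
}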

#6 Check for outdated dependencies

To check for outdated dependencies, npm comes with a built-in tool: the npm outdated command. You have to run it in the directory of the project you'd like to check.

$ npm outdated
Package                 Current  Wanted  Latest  Location
conventional-changelog  0.5.3    0.5.3   1.1.0   @risingstack/docker-node  
eslint-config-standard  4.4.0    4.4.0   6.0.1   @risingstack/docker-node  
eslint-plugin-standard  1.3.1    1.3.1   2.0.0   @risingstack/docker-node  
rimraf                  2.5.1    2.5.1   2.5.4   @risingstack/docker-node  

Once you maintain more projects, it can become an overwhelming task to keep all your dependencies up to date in each of your projects. To automate this task, you can use Greenkeeper which will automatically send pull requests to your repositories once a dependency is updated.

#7 No devDependencies in production

Development dependencies are called development dependencies for a reason - you don't have to install them in production. It makes your deployment artifacts smaller and more secure, as you will have fewer modules in production that can have security problems.

To install production dependencies only, run this:

$ npm install --production

Alternatively, you can set the NODE_ENV environment variable to production:

$ NODE_ENV=production npm install

"Don't install development dependencies in production" via @RisingStack #nodejs

Click To Tweet

#8 Secure your projects and tokens

If you use npm with a logged-in user, your npm token is placed in the .npmrc file. As a lot of developers store dotfiles on GitHub, sometimes these tokens get published by accident. Currently, there are thousands of results when searching for .npmrc files on GitHub, and a huge percentage of them contain tokens. If you have dotfiles in your repositories, double check that your credentials are not pushed!

Another source of possible security issues are the files which are published to npm by accident. By default npm respects the .gitignore file, and files matching those rules won't be published. However, if you add an .npmignore file, it will override the content of .gitignore - so they won't be merged.

#9 Developing packages

When developing packages locally, you usually want to try them out with one of your projects before publishing them to npm. This is where npm link comes to the rescue.

What npm link does is that it creates a symlink in the global folder that links to the package where the npm link was executed.

You can run npm link package-name from another location, to create a symbolic link from the globally installed package-name to the /node_modules directory of the current folder.

"Use npm link to test packages locally" via @RisingStack #nodejs

Click To Tweet

Let's see it in action!

# create a symlink to the global folder
/projects/request $ npm link

# link request to the current node_modules
/projects/my-server $ npm link request

# after running this project, the require('request') 
# will include the module from projects/request


Next up on Node.js at Scale: SemVer and Module Publishing

The next article in the Node.js at Scale series will be a SemVer deep dive with how to publish Node.js modules.

Let me know if you have any questions in the comments!

Case Study: Finding a Node.js Memory Leak in Ghost

At RisingStack, we have been using Ghost from the very beginning, and we love it! As of today we have more than 125 blogposts, with thousands of unique visitors every day, and with 1.5 million pageviews in 2016 overall.

In this post, I'm going to share the story of how we discovered a Node.js memory leak in ghost@0.9.0, and what role Trace played in the process of detecting and fixing it.

What's Ghost?

Just a blogging platform

Node.js Memory Leak - The Ghost blogging platforms logo

Ghost is a fully open-source publishing platform written entirely in JavaScript. It uses Node.js for the backend, Ember.js for the admin side and Handlebars.js to power the rendering.

Ghost is actively developed - in the last 30 days, it had 10 authors with 66 commits to the master branch. The project's roadmap can be found here: https://trello.com/b/EceUgtCL/ghost-roadmap.

You can open an account at https://ghost.org/ and start writing instantly - or you can host your own version of Ghost, just like we do.

Our Ghost Deployment

Firstly, I'd like to give you a quick overview of how we deploy and use Ghost in production at RisingStack. We use Ghost as an npm module, required into a bigger project, something like this:

// adding Trace to monitor the blog
require('@risingstack/trace')  
const path = require('path')  
const ghost = require('ghost')

ghost({  
  config: path.join(__dirname, 'config.js')
}).then(function (ghostServer) {
  ghostServer.start()
})

Deployments are done using Circle CI which creates a Docker image, pushes it to a Docker registry and deploys it to a staging environment. If everything looks good, the updates are moved to the production blog you are reading now. As a backing database, the blog uses PostgreSQL.

The Node.js Memory Leak

As we like to keep our dependencies up-to-date, we updated to ghost@0.9.0 as soon as it came out. Once we did this, our alerts started to fire, as memory usage started to grow:

Node.js Memory leak in ghost - Trace memory metrics

Luckily, we had alerts set up for memory usage in Trace, which notified us that something was not right. As Trace integrates seamlessly with Opsgenie and PagerDuty, we could have set up alerts for those channels as well.


We set up alerts for the blog service at 180 and 220 MB because it usually consumes around 150 MB when everything's all right.

Setting up alerts for Node.js memory leaks in Trace

What was even better is that the alerting was set up in a way that triggered actions on the collector level. What does this mean? It means that Trace could create a memory heapdump automatically, without human intervention. Once we started to investigate the issue, the memory heapdump was already in the Profiler section of Trace, in the format supported by the Google Chrome DevTools.

This enabled us to start looking at the problem instantly, and as it happened in the production system - not by trying to reproduce the issue in a local development environment.

Also, as we could take multiple heapdumps from the application itself, we could compare them using the comparison view of the DevTools.

Memory heapshot comparison with Trace and Chrome's Devtools

How do you use the comparison view to find the source of a problem? In the picture above, you can see that I compared the heapdump that Trace automatically collected when the alert was triggered with a heapdump that was requested earlier, when everything was ok with the service.

"You can use 2 heapdumps & Chromes comparison view to investigate a Node.js memory leak!" via @RisingStack #nodejs

Click To Tweet

What you have to look for is the #Delta column, which shows +772 in our case. This means that at the time our high memory usage alert was triggered, the heapdump contained 772 extra objects. At the bottom of the picture you can see what these elements were, and that they had something to do with the lodash module.

"The "delta" shows the extra objects in your memory heapdump in case of a leak!" via @RisingStack #nodejs

Click To Tweet

Figuring this out otherwise would be extremely challenging since you’d have to reproduce the issue in a local environment - which is tricky if you don’t even know what caused it.

Should I update? Well..

The final cause of the leak was found by Katharina Irrgang, a core Ghost contributor. To check out the whole thread, take a look at the GitHub issue: https://github.com/TryGhost/Ghost/issues/7189 . A fix was shipped in 0.10.1 - but updating to it caused another issue: slow response times.

Slow Response Times

Once we upgraded to the new version, we ran into a new problem - our blog's response times started to degrade. The 95th percentile grew from 100 ms to almost 300 ms. It instantly triggered the alerts we had set for response times.

Slow response time graph from Trace

To investigate the slow response times, we started to take CPU profiles using Trace. We are still looking into the exact reason, but so far we suspect something is off with how moment.js is used.

CPU profile analysis with Trace

We will update this post once we have found out why it happens.

Conclusion

I hope this article helped you to figure out what to do in case you're experiencing memory leaks in your Node.js applications. If you'd like to get memory heapdumps automatically in a case like this, connect your services with Trace and enable alerting just like we did earlier.

If you have any additional questions, you can reach me in the comments section!

Node.js Interactive Europe 2016 Recap

Node.js Interactive Europe took place on 15-16 September 2016 in Amsterdam, The Netherlands. The two days were packed with great talks - let's see what it was about!

Amsterdam is the capital of the Kingdom of the Netherlands, with almost 2.5 million people living in the metropolitan area. It was founded as a small fishing village in the 12th century, then became the most important port during the 17th century. Nowadays, the canals of Amsterdam and the Defence Line are on the UNESCO World Heritage List.

Node.js Interactive location: Amsterdam

Kev, Peter, and Gergely attended the conference from RisingStack - Peter talked about how we killed the monolith under Trace, while Gergely was a panelist at the “Node.js in Europe” and “Node.js Education” panel discussions.

The First Day

The first day started with keynotes from Mikeal Rogers, James Snell, Ashley Williams, and Bert Belder.

The State of Node.js - Mikeal Rogers

Mikeal talked about the current state of Node.js adoption, and let us know that Node.js is everywhere. It started out as a server-side framework in 2009, but it is now used in IoT devices, mobiles and on the desktop as well.

Node.js Interactive Amsterdam - Mikeal Rogers keynote

The project currently has 87 active contributors. Thanks to them, Node.js v7.0.0 will be released in October. This release won't be an LTS version, so it will only be maintained till July 2017, with Node.js v8.0.0 being published in April 2017.

"Great news from Node Interactive: Node.js v8.0.0 is coming in April 2017." via @RisingStack #nodejs

Click To Tweet

From the community side, we learned that there is a new initiative called NodeTogether. NodeTogether's aim is to improve the diversity of the Node community by bringing people of underrepresented groups together to learn Node.js.

Node.js Interactive Amsterdam - Opening Keynote

On the more technical side, we learned that the inspector will become part of the core itself using node --inspect.

How the Event Loop Works - Bert Belder

Next, Bert Belder told us how the event loop works in his insightful talk. We are excited to watch the video of this talk again!

Node.js Interactive Amsterdam - How the Event Loop works by Bert Belder

(Illustration from Bert's slides)


The Road Forward - Tracy Hinds

Tracy's topic was how to make Node more inclusive and diverse. To create a sense of welcome, barriers to access and biases must be surfaced, and people must feel safe enough to contribute to the Node.js project.

Node.js and Microservices - Yunong Xiao (Netflix) and Peter Marton (RisingStack)

Then came the microservices talks from Yunong J Xiao and Peter Marton.

Yunong (Netflix) talked about Slaying the monolith, while Peter (RisingStack) talked about Breaking down the monolith. Both talks arrived at the conclusion that API Gateways are the way to build microservices.

Node.js Interactive Amsterdam - Yunong Xiao from Netflix

Node.js Interactive Amsterdam - Peter Marton on Trace by RisingStack

The Post-Mortem Diagnostics Group

After lunch, Yunong Xiao and Michael Dawson talked about the post-mortem diagnostics working group, and nodereport was announced! Nodereport is an add-on for Node.js, delivered as an npm module, which provides a human-readable diagnostic summary report, written to a file.

IoT and Containers with Node.js

During the afternoon there were great talks on IoT, Web Standards, Robots and panel talks on Node.js in Europe, IoT and Containers. Containers are popular in the Node.js Community, which was clearly visible from the number of talks on the topic, or from the results of our Node.js Survey we conducted this summer.

Second day

The second day started with a great talk on security by Paul Millhiam (WildWorks). Paul is a lead developer at WildWorks, whose job is to keep millions of kids safe by using Node.js. We learned how data validation is directly linked to security, and how to keep code secure and stable to work with.

Eugine Isratti's session described how anyone can use serverless computing to achieve high performance without huge effort or resource allocation. Eugine's work at Mitoc Group demonstrated how developers can design full-stack applications at low cost and low maintenance.

We also saw great talks on blockchain and on how to build offline applications, followed by more container talks and talks on making native add-ons.

The day closed with panel discussions on learning Node.js and best practices on how to contribute to the core.

Node.js Interactive Amsterdam Venue

After the closing keynote, the audience headed to Beer.js, which was hosted in a nearby pub by NodeSchool Amsterdam. It was a great opportunity to talk with other attendees as well as the speakers.

Saturday and Sunday

The Collaboration Summit took place during the weekend. The Summit's goal was to get new developers involved in various areas of the project through a series of hands-on workshops led by existing contributors. The workshops led into "sprints" where new developers tackled real problems in the code base with the support of the mentors in attendance. It was great to see new contributors opening pull requests instantly!

Node.js Interactive - Next Up

Node.js Interactive Europe was a great conference to meet the people who enable us to use Node.js with ease, and to learn how others use it, what challenges they face and what problems they solve. There were a lot of great conversations to join, all in a friendly atmosphere. We hope to see you there next year!

The next Node Interactive event will take place in Austin, starting on 29 November. For more information and tickets visit http://events.linuxfoundation.org/events/node-interactive.


How Developers use Node.js - Survey Results

RisingStack - the provider of Trace, a next-gen Node.js debugging and performance monitoring solution, and a silver member of the Node.js Foundation - conducted a survey during the summer of 2016 to find out how developers use Node.js and what technologies they prefer with it. This article summarizes the results.

The results show that MongoDB, RabbitMQ, AWS, Jenkins, Docker and Amazon Container Service are the go-to choices for developing, containerizing and shipping Node.js applications.

The survey also explored various aspects of Node.js development, such as the choices developers make for async control flow, debugging, continuous integration and finding packages. The results also reveal Node developers' major pain point: debugging.

The survey was open for 35 days, from 11 July until 15 August 2016. During this period, 1126 Node.js developers completed it. 55% of them have more than two years of Node.js experience, while 26% have used Node for between one and two years. 20% work at a publicly traded company, and 7% at a Fortune 500 enterprise.

Technologies used with Node.js

MongoDB became the go-to database

Node.js Survey - What databases are you using? MongoDB wins.

According to the results, MongoDB is clearly the go-to database for Node.js developers: roughly ⅔ of our respondents claimed that they use MongoDB with their Node.js applications. It's also worth noting that the popularity of Redis massively increases with the experience of Node engineers. The same trend holds for PostgreSQL and Elasticsearch.
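
For reference, talking to MongoDB from Node.js takes only a few lines - a minimal sketch using the promise API of the official mongodb driver (2.x at the time of writing); the connection URL and collection name are placeholders:

// npm install mongodb
const MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/myapp')
  .then((db) => db.collection('users').find({ active: true }).toArray())
  .then((users) => console.log(users))
  .catch((err) => console.error(err));
// (a real application would keep the db handle around
// and close it on shutdown)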

Node.js Monitoring and Debugging from the Experts of RisingStack

Build performant applications using Trace
Learn more

Node.js Survey - Database usage and developer experience

"The popularity of Redis and PostgreSQL is heavily increasing with developer XP" via @RisingStack #nodejs #survey

Click To Tweet


Redis leads as a caching solution, but a lot of developers still don't cache at all

Node.js Survey - What do you use for caching? Redis wins.

Half of our respondents said that they are using Redis for caching, but a staggering 45% stated that they don't use any caching at all. Cross-referencing the answers with developer experience shows that the popularity of Redis is quite high amongst long-time Node users compared to engineers with less than one year of Node.js experience.

Node.js Survey - Caching usage and developer experience
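
Caching with Redis takes very little code, which makes the 45% figure even more surprising. Here is a minimal sketch with the node_redis client, assuming a Redis server on localhost - the key name and TTL are arbitrary:

// npm install redis
const redis = require('redis');
const client = redis.createClient();

// cache a value for 60 seconds, then read it back
client.setex('greeting', 60, 'Hello, cache!');
client.get('greeting', (err, value) => {
  if (err) throw err;
  console.log(value); // 'Hello, cache!'
  client.quit();
});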

The popularity of messaging systems is still low

According to our survey, 58% of Node.js developers don't use any messaging system. This means that developers either rarely use messaging in their microservice systems, use REST APIs between services instead, or simply don't have a sophisticated distributed system in place.

Node.js Survey - What messaging systems are you using? RabbitMQ wins.

Those who do use messaging systems answered that they mostly use RabbitMQ (24% of all respondents). If we only look at the responses of people who use messaging systems, RabbitMQ beats the rest of the existing solutions by far.

Node.js Survey - Messaging system usage
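
To illustrate, publishing and consuming a message with RabbitMQ can be as short as the sketch below, using the amqplib package - the broker URL and queue name are placeholders:

// npm install amqplib
const amqp = require('amqplib');

amqp.connect('amqp://localhost')
  .then((conn) => conn.createChannel())
  .then((ch) =>
    ch.assertQueue('tasks').then(() => {
      ch.sendToQueue('tasks', Buffer.from('process this'));
      return ch.consume('tasks', (msg) => {
        console.log(msg.content.toString()); // 'process this'
        ch.ack(msg);
      });
    })
  )
  .catch((err) => console.error(err));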

Node.js apps are most likely running on AWS

According to our survey, 43% of Node.js developers use AWS for running their applications, but running their own datacenter is popular as well (34%), especially amongst enterprises - nearly 50% of them have their own datacenters - but this is no surprise.

Node.js Survey - Where do you run your Node.js apps? AWS.

"43% of Node.js developers use AWS for running their applications" via @RisingStack #nodejs #survey @awscloud

Click To Tweet

What's interesting though is that Heroku and DigitalOcean are competing neck and neck to become the second biggest cloud platform for Node.js. According to our data, DigitalOcean is more popular with smaller companies (under 50 employees), while Heroku stays strong as an enterprise solution as well.

Node.js Survey - Running apps and company size

Docker dominates in the Node community

Currently, Docker containers are the go-to solution for most Node.js developers: 47% of all respondents claimed to use Docker, which is 73% of all container technology users in the survey. Docker seems to be equally popular within all company sizes, but more experienced developers (the ones with over one year of experience) appear to use it much more.

Node.js Survey - What container techs or VMs are you using? Docker.

"Docker containers are the go-to solution for most of Node.js developers." via @RisingStack #nodejs #survey @docker

Click To Tweet

64% of the respondents said that they use some container technology - which means that the popularity of containers has risen significantly since the last major Node.js survey in January 2016, when it stood at 45%.

Node.js Survey - Container techs and developer experience.

Amazon Container Service is the first choice for running containers

Node.js Survey - How do you run your containers? Amazon Container Service wins.

While Amazon Container Service leads as the choice for running containers with Node.js, it's worth noting that Kubernetes is already at 25% according to our survey, and it seems to be especially popular with enterprise Node.js developers.

Node.js Survey - Running Amazon and Kubernetes with company size

Node.js development

Configuration files are used more often than environment variables

The majority of Node developers (59% vs. 38%) prefer config files over environment variables. Only 29 respondents (3%) stated that they use both.

Node.js Survey - Environment variables or config files? Config files wins.

Using only configuration files suggests a possible security problem, since it implies that credentials are stored in the repositories. If the credentials to production systems are sitting in GitHub, you can quickly run into trouble with rogue developers.

Using environment variables is highly recommended for secrets, while config files can still be used for everything else.
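
A minimal sketch of this split - the variable names are made up for the example; non-sensitive defaults live in the config file, while secrets come only from the environment:

// config.js
module.exports = {
  port: process.env.PORT || 3000,
  db: {
    host: process.env.DB_HOST || 'localhost',
    // no default on purpose - the secret must come from the environment
    password: process.env.DB_PASSWORD
  }
};

The application is then started like DB_PASSWORD=s3cret node app.js, so the secret never has to touch the repository.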

Promises lead with async control flow

In Node.js, most of the core libraries work with callbacks. The results show that Node.js users are leaning towards promises instead.

Node.js Survey - What do you use for async control flow? Promises wins.

Around half a year ago there was a pull request in the core Node.js repository asking for async core functions to return a native Promise. The answer to this was: “A Promises API doesn’t make sense for core right now because it's too early in the evolution of V8-based promises and their relationship to other ES* features. There is tiny interest within the TC in exploring this in the core in the short-term.”

Maybe it's time to revisit the issue, since the demand is clearly present.
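
Until core exposes promises directly, developers wrap the callback APIs by hand. A minimal sketch of the common pattern, using fs.readFile as the example:

const fs = require('fs');

// turn a callback-based core API into a Promise-returning one
function readFileAsync(path, encoding) {
  return new Promise((resolve, reject) => {
    fs.readFile(path, encoding, (err, data) => {
      if (err) return reject(err);
      resolve(data);
    });
  });
}

readFileAsync('package.json', 'utf8')
  .then((contents) => console.log(JSON.parse(contents).name))
  .catch((err) => console.error(err));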

Developers trust console.log for debugging

Console.log is leading the race amongst debugging solutions like the Node Inspector, the built-in debugger and the debug module. Around ¾ of Node developers use it for finding errors in their applications, even though much more sophisticated solutions are available.

Node.js Survey - How do you debug your applications? Using the console.log

"Around ¾ of Node developers use console.log to find errors in their applications." via @RisingStack #nodejs #survey

Click To Tweet

A closer look at the data shows that more experienced developers are leaning towards the Node Inspector and the debug module as well.

Node.js Survey - Debugging applications and developer experience
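
The debug module is a particularly cheap upgrade from console.log: it gives you namespaced loggers that stay silent unless enabled through the DEBUG environment variable. A minimal sketch (the namespace is arbitrary):

// npm install debug
const debug = require('debug')('myapp:server');

debug('starting up');
debug('listening on port %d', 3000);

// nothing is printed unless the namespace is enabled:
//   DEBUG=myapp:* node server.js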

APMs are still quite unpopular in the Node.js community

According to the responses in our survey, only ¼ of Node.js developers use APMs - application performance monitoring tools - to identify issues in their applications. However, emerging trends in the dataset suggest that APM usage grows with company size and developer experience.

Node.js Survey - How do you identify issues in your app? Using logs.

SaaS CIs still have a low market share in the Node community

According to the answers in our survey, using shell scripts is the most popular way of pushing code to staging or production environments - but Jenkins clearly wins among continuous delivery and integration platforms so far, and it becomes more popular as company size increases.

Node.js Survey - What do you use to push code or containers? Shell scripts win.
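
For the curious, a home-grown deploy script of this kind is often just a handful of lines. The sketch below is entirely hypothetical - the image name, registry and server are placeholders, not a recommended setup:

#!/usr/bin/env bash
set -e

# build and publish the container image
docker build -t registry.example.com/myapp:latest .
docker push registry.example.com/myapp:latest

# replace the running container on the target host
ssh deploy@myserver.example.com \
  'docker pull registry.example.com/myapp:latest \
   && docker stop myapp && docker rm myapp \
   && docker run -d --name myapp -p 3000:3000 registry.example.com/myapp:latest'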

Node.js developers rarely update dependencies

Frequently updating dependencies is highly recommended for Node.js applications, since around 15% of npm packages carry a known vulnerability, and 76% of Node shops use vulnerable dependencies according to a recent survey.

Node.js Survey - How often do you update dependencies? Less frequently than a month.

Updating dependencies infrequently leaves applications exposed to known attacks. According to our survey, 45% of Node.js developers update dependencies less frequently than once a month, and 27% of them update dependencies month by month. Only 28% answered that they update dependencies at least every week.

These numbers correlate neither with company size nor with developer experience.
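
Checking for stale dependencies takes two commands that ship with npm itself, so keeping up is cheap:

npm outdated   # list dependencies that have newer versions available
npm update     # update to the newest versions allowed by package.json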

Node.js developers Google for their packages

According to our survey, the majority of developers use Google to find packages and to decide which of them they should use. Although the popularity of the npmjs.org/npms.io search platforms is 56% amongst our respondents, the data shows that it goes up to almost 70% for experienced developers (more than four years of Node development). Preference for the dedicated search platforms increases with experience.

Node.js Survey - How do you decide what package to pick? People mostly Google for them.

Junior Node.js developers don’t know what semantic versioning is

Although 71% of our respondents use semantic versioning when publishing or consuming modules, this number should be higher in our opinion. Everyone should use semantic versioning, since npm works with semver! Updating packages without it can easily break Node.js applications.

Node.js Survey - Do you use semantic versioning? Mostly yes.

If we dig deeper into the dataset, we can see that around half of the Node developers with less than a year of experience don't know what semver is or don't use it, while advanced developers embrace it at a much higher rate.
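
As a quick refresher, these are the version range operators npm understands in package.json - the express version is only an example:

"express": "4.4.5"    exact version only
"express": "~4.4.5"   patch updates allowed: >=4.4.5 <4.5.0
"express": "^4.4.5"   minor and patch updates allowed: >=4.4.5 <5.0.0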

Node.js teams introduce new tools and technologies very fast

According to our survey, 35% of Node developers can introduce new technologies, tools or products to their companies in a few days, and 29% in just a few weeks.

Node.js Survey - How much time is needed to introduce new technologies, tools or products to your company? A few weeks.

"35% of Node devs can introduce new tech/tools to their companies in a few days." via @RisingStack #nodejs #survey

Click To Tweet

If we investigate the data more thoroughly, a not-so-surprising pattern emerges: the time needed to introduce new technologies and tools gradually increases with the size of a company.

Debugging is the most severe pain-point for developing with Node.js

We also asked Node developers what their biggest pain points are regarding development. The top answers were:

  • Debugging / Profiling / Performance Monitoring
  • Callbacks and Callback hell
  • Understanding Async programming
  • Dependency management
  • Lack of conventions/best practices
  • Structuring
  • Bad documentation
  • Finding the right packages

"The biggest pain point of Node.js development is debugging." via @RisingStack #nodejs #survey

Click To Tweet

Conclusion

Developing with Node.js is still an interesting and ever-changing experience. We'd like to thank the engineers who took the time to answer our questions, and we hope that the information presented in this article is valuable for the whole Node community.

The full dataset is going to be released and linked in this blog post in a few days.