
Node.js Interview Questions and Answers (2017 Edition)


Two years ago we published our first article on common Node.js Interview Questions and Answers. Since then, a lot has improved in the JavaScript and Node.js ecosystem, so it was time to update it.

Important Disclaimers

It is never good practice to judge someone by questions like these alone, but they can give you an overview of the person's experience in Node.js.

But obviously, these questions do not give you the big picture of someone's mindset and thinking.

I think that a real-life problem can show a lot more of a candidate's knowledge - so we encourage you to do pair programming with the developers you are going to hire.

Finally and most importantly: we are all humans, so make your hiring process as welcoming as possible. These questions are not meant to be used as "Questions & Answers" but just to drive the conversation.

Node.js Interview Questions for 2017

  • What is an error-first callback?
  • How can you avoid callback hells?
  • What are Promises?
  • What tools can be used to assure consistent style? Why is it important?
  • When should you npm and when yarn?
  • What's a stub? Name a use case!
  • What's a test pyramid? Give an example!
  • What's your favorite HTTP framework and why?
  • When are background/worker processes useful? How can you handle worker tasks?
  • How can you secure your HTTP cookies against XSS attacks?
  • How can you make sure your dependencies are safe?

The Answers

What is an error-first callback?

Error-first callbacks are used to pass errors and data. The first argument is always the error object, which you have to check to see if something went wrong. Additional arguments are used to pass data.

const fs = require('fs')

fs.readFile(filePath, function (err, data) {
  if (err) {
    // handle the error, the return is important here
    // so execution stops here
    return console.log(err)
  }
  // use the data object
  console.log(data)
})

How can you avoid callback hells?

There are lots of ways to solve the issue of callback hells:

  • modularization: break your callbacks into independent functions,
  • use a control flow library, like async,
  • use generators with Promises,
  • use async/await.
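Modularization alone already flattens a nested chain - a minimal sketch, with hypothetical readUser and sendEmail helpers, that turns anonymous nesting into named steps:

function handleNewUser (userId) {
  readUser(userId, onUserRead)
}

function onUserRead (err, user) {
  if (err) return console.error(err)
  sendEmail(user.email, onEmailSent)
}

function onEmailSent (err) {
  if (err) return console.error(err)
  console.log('welcome email sent')
}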

What are Promises?

Promises are a concurrency primitive, first described in the 80s. Now they are part of most modern programming languages to make your life easier. Promises can help you better handle async operations.

An example can be the following snippet, which after 100ms prints out the result string to the standard output. Also, note the catch, which can be used for error handling. Promises are chainable.

new Promise((resolve, reject) => {  
  setTimeout(() => {
    resolve('result')
  }, 100)
})
  .then(console.log)
  .catch(console.error)

What tools can be used to assure consistent style? Why is it important?

When working in a team, consistent style is important, so team members can modify more projects easily, without having to get used to a new style each time.

Also, it can help eliminate programming issues using static analysis.

Tools that can help:

  • ESLint,
  • Standard (the JavaScript Standard Style).
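To make linting part of the everyday workflow, you can wire it into the project's npm scripts - a minimal sketch, assuming standard and mocha are installed as devDependencies:

{
  "scripts": {
    "lint": "standard",
    "test": "standard && mocha"
  }
}

This way npm test fails on style violations before the test suite even runs.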

If you’d like to be even more confident, I suggest you learn and embrace the JavaScript Clean Coding principles as well!





What's a stub? Name a use case!

Stubs are functions/programs that simulate the behaviors of components/modules. Stubs provide canned answers to function calls made during test cases.

An example can be writing a file, without actually doing so.

// sinon provides the stub; the assertion comes from chai with the sinon-chai plugin
var sinon = require('sinon')
var chai = require('chai')
chai.use(require('sinon-chai'))
var expect = chai.expect
var fs = require('fs')

var writeFileStub = sinon.stub(fs, 'writeFile', function (path, data, cb) {  
  return cb(null)
})

expect(writeFileStub).to.be.called  
writeFileStub.restore()  

What's a test pyramid? Give an example!

A test pyramid describes the ratio of how many unit tests, integration tests and end-to-end tests you should write.

An example for an HTTP API may look like this:

  • lots of low-level unit tests for models (dependencies are stubbed),
  • fewer integration tests, where you check how your models interact with each other (dependencies are not stubbed),
  • even fewer end-to-end tests, where you call your actual endpoints (dependencies are not stubbed).

What's your favorite HTTP framework and why?

There is no right answer for this. The goal here is to understand how deeply the candidate knows the framework she/he uses - and to hear the pros and cons of picking that framework.

When are background/worker processes useful? How can you handle worker tasks?

Worker processes are extremely useful if you'd like to do data processing in the background, like sending out emails or processing images.

There are lots of options for this like RabbitMQ or Kafka.

How can you secure your HTTP cookies against XSS attacks?

XSS occurs when the attacker injects executable JavaScript code into the HTML response.

To mitigate these attacks, you have to set flags on the set-cookie HTTP header:

  • HttpOnly - this attribute is used to help prevent attacks such as cross-site scripting since it does not allow the cookie to be accessed via JavaScript.
  • secure - this attribute tells the browser to only send the cookie if the request is being sent over HTTPS.

So it would look something like this: Set-Cookie: sid=<cookie-value>; HttpOnly; Secure. If you are using Express, modules like cookie-session set the HttpOnly flag by default.
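A minimal sketch with express-session (the secret is a placeholder) that sets both flags explicitly:

const express = require('express')
const session = require('express-session')

const app = express()

app.use(session({
  secret: 'keyboard cat', // placeholder - load a real secret from your config
  resave: false,
  saveUninitialized: false,
  cookie: {
    httpOnly: true, // not readable from document.cookie
    secure: true    // only sent over HTTPS
  }
}))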

How can you make sure your dependencies are safe?

When writing Node.js applications, ending up with hundreds or even thousands of dependencies can easily happen.
For example, if you depend on Express, you depend on 27 other modules directly, and of course on those modules' dependencies as well, so manually checking all of them is not an option!

The only option is to automate the update / security audit of your dependencies. For that there are free and paid options, like the Node Security Project's nsp CLI, Snyk, or Greenkeeper.

Node.js Interview Puzzles

The following part of the article is useful if you’d like to prepare for an interview that involves puzzles, or tricky questions.

What's wrong with the code snippet?

new Promise((resolve, reject) => {  
  throw new Error('error')
}).then(console.log)

The Solution

There is no catch after the then, so the error will be a silent one: there will be no indication that an error was thrown.

To fix it, you can do the following:

new Promise((resolve, reject) => {  
  throw new Error('error')
}).then(console.log).catch(console.error)

If you have to debug a huge codebase, and you don't know which Promise can potentially hide an issue, you can use the unhandledRejection hook. It will print out all unhandled Promise rejections.

process.on('unhandledRejection', (err) => {  
  console.log(err)
})

What's wrong with the following code snippet?

function checkApiKey (apiKeyFromDb, apiKeyReceived) {  
  if (apiKeyFromDb === apiKeyReceived) {
    return true
  }
  return false
}

The Solution

When you compare security credentials it is crucial that you don't leak any information, so you have to make sure that you compare them in fixed time. If you fail to do so, your application will be vulnerable to timing attacks.

But why does it work like that?

V8, the JavaScript engine used by Node.js, tries to optimize the code you run from a performance point of view. It starts comparing the strings character by character, and once a mismatch is found, it stops the comparison. So the more characters the attacker gets right, the longer the comparison takes - the response time leaks information about the secret.

To solve this issue, you can use the npm module called cryptiles.

const cryptiles = require('cryptiles')

function checkApiKey (apiKeyFromDb, apiKeyReceived) {
  return cryptiles.fixedTimeComparison(apiKeyFromDb, apiKeyReceived)
}
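Since Node.js 6.6, the built-in crypto module offers a constant-time comparison as well - a sketch (the inputs are hashed first, because timingSafeEqual requires buffers of equal length):

const crypto = require('crypto')

function checkApiKey (apiKeyFromDb, apiKeyReceived) {
  // hashing gives both sides the same length, as timingSafeEqual requires
  const expected = crypto.createHash('sha256').update(apiKeyFromDb).digest()
  const actual = crypto.createHash('sha256').update(apiKeyReceived).digest()
  return crypto.timingSafeEqual(expected, actual)
}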

What's the output of following code snippet?

Promise.resolve(1)  
  .then((x) => x + 1)
  .then((x) => { throw new Error('My Error') })
  .catch(() => 1)
  .then((x) => x + 1)
  .then((x) => console.log(x))
  .catch(console.error)

The Answer

The short answer is 2 - however with this question I'd recommend asking the candidates to explain what will happen line-by-line to understand how they think. It should be something like this:

  1. A new Promise is created, that will resolve to 1.
  2. The resolved value is incremented by 1 (so it is 2 now), and returned instantly.
  3. The resolved value is discarded, and an error is thrown.
  4. The error is discarded, and a new value (1) is returned.
  5. The execution did not stop at the catch - since the exception was handled, it continued, and a new, incremented value (2) is returned.
  6. The value is printed to the standard output.
  7. This line won't run, as there was no exception.

A day may work better than questions

Spending at least half a day with your possible next hire is worth more than a thousand of these questions.

Once you do that, you will better understand if the candidate is a good cultural fit for the company and has the right skill set for the job.

Did we miss anything? Let us know!

What was the craziest interview question you had to answer? What's your favorite question / puzzle to ask? Let us know in the comments! :)


Node.js Best Practices - How to Become a Better Developer in 2017


A year ago we wrote a post on How to Become a Better Node.js Developer in 2016 which was a huge success - so we thought now it is time to revisit the topics and prepare for 2017!

In this article, we will go through the most important Node.js best practices for 2017, topics that you should care about and educate yourself in. Let’s start!

Node.js Best Practices for 2017

Use ES2015

Last year we advised you to use ES2015 - however, a lot has changed since.

Back then, Node.js v4 was the LTS version, and it had support for 57% of the ES2015 functionality. A year passed and ES2015 support grew to 99% with Node v6.

If you are on the latest Node.js LTS version you don't need babel anymore to use the whole feature set of ES2015. But even with this said, on the client side you’ll probably still need it!

For more information on which Node.js version supports which ES2015 features, I'd recommend checking out node.green.

Use Promises

Promises are a concurrency primitive, first described in the 80s. Now they are part of most modern programming languages to make your life easier.

Imagine the following example code that reads a file, parses it, and prints the name of the package. Using callbacks, it would look something like this:

const fs = require('fs')

fs.readFile('./package.json', 'utf-8', function (err, data) {
  if (err) {
    return console.log(err)
  }

  var pkg
  try {
    pkg = JSON.parse(data)
  } catch (ex) {
    return console.log(ex)
  }
  console.log(pkg.name)
})

Wouldn't it be nice to rewrite the snippet into something more readable? Promises help you with that:

fs.readFileAsync('./package.json').then(JSON.parse).then((data) => {  
  console.log(data.name)
})
.catch((e) => {
  console.error('error reading/parsing file', e)
})

Of course, for now, the fs API does not have a readFileAsync method that returns a Promise, so to make it work you have to wrap it with a promisification helper, like Bluebird's promisifyAll.
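A minimal sketch of that wrapping, assuming the bluebird package is installed:

const Promise = require('bluebird')
// promisifyAll adds *Async variants of every method (readFileAsync, writeFileAsync, ...)
const fs = Promise.promisifyAll(require('fs'))

fs.readFileAsync('./package.json', 'utf-8')
  .then(JSON.parse)
  .then((pkg) => console.log(pkg.name))
  .catch((err) => console.error('error reading/parsing file', err))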

Use the JavaScript Standard Style

When it comes to code style, it is crucial to have a company-wide standard, so when you have to change projects, you can be productive from day zero, without having to get used to a new set of presets each time.

At RisingStack we have incorporated the JavaScript Standard Style in all of our projects.

Node.js best practices - The Standard JS Logo

With Standard, there are no decisions to make, and no .eslintrc, .jshintrc, or .jscsrc files to manage. It just works. You can find the Standard rules here.





Use Docker - Containers are Production Ready in 2017!

You can think of Docker images as deployment artifacts - Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server.

But why should you start using Docker?

  • it enables you to run your applications in isolation,
  • as a consequence, it makes your deployments more secure,
  • Docker images are lightweight,
  • they enable immutable deployments,
  • and with them, you can mirror production environments locally.

To get started with Docker, head over to the official getting started tutorial. Also, for orchestration we recommend checking out our Kubernetes best practices article.

Monitor your Applications

If something breaks in your Node.js application, you should be the first one to know about it, not your customers.

One of the newer open-source solutions is Prometheus, which can help you achieve this. Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. The only downside of Prometheus is that you have to set it up and host it yourself.

If you are looking for an out-of-the-box solution with support, Trace by RisingStack is a great solution developed by us.

Trace will help you with

  • alerting,
  • memory and CPU profiling in production systems,
  • distributed tracing and error searching,
  • performance monitoring,
  • and keeping your npm packages secure!

Node.js Best Practices for 2017 - Use Trace and Profiling

Use Messaging for Background Processes

If you are using HTTP for sending messages, then whenever the receiving party is down, all your messages are lost. However, if you pick a persistent transport layer, like a message queue to send messages, you won't have this problem.

If the receiving service is down, the messages will be kept, and can be processed later. If the service is not down, but there is an issue, processing can be retried, so no data gets lost.

An example: you'd like to send out thousands of emails. In this case, you would just have to put some basic information into the queue, like the target email address and the first name, and a background worker could easily put together the email's content and send them out.

What's really great about this approach is that you can scale it whenever you want, and no traffic will be lost. If you see that there are millions of emails to be sent out, you can add extra workers, and they can consume the very same queue.

You have lots of options for messaging queues, like RabbitMQ, Kafka, NSQ or AWS SQS.
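As an illustration, here is a minimal producer/worker sketch with the amqplib RabbitMQ client - the queue name and payload are made up, and a RabbitMQ server is assumed to run locally:

const amqp = require('amqplib')

// producer: put only the minimal job data on the queue
amqp.connect('amqp://localhost')
  .then((connection) => connection.createChannel())
  .then((channel) => channel.assertQueue('emails', { durable: true })
    .then(() => channel.sendToQueue('emails', Buffer.from(JSON.stringify({
      email: 'john@example.com',
      firstName: 'John'
    })))))

// worker: consume jobs, acknowledge them only once they are done
amqp.connect('amqp://localhost')
  .then((connection) => connection.createChannel())
  .then((channel) => channel.assertQueue('emails', { durable: true })
    .then(() => channel.consume('emails', (message) => {
      const job = JSON.parse(message.content.toString())
      // ... put together and send out the email here ...
      channel.ack(message)
    })))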

Use the Latest LTS Node.js version

To get the best of the two worlds (stability and new features) we recommend using the latest LTS (long-term support) version of Node.js. As of writing this article, it is version 6.9.2.

To easily switch Node.js version, you can use nvm. Once you installed it, switching to LTS takes only two commands:

nvm install 6.9.2  
nvm use 6.9.2  


Use Semantic Versioning

We conducted a Node.js Developer Survey a few months ago, which allowed us to get some insights on how people use semantic versioning.

Unfortunately, we found out that only 71% of our respondents use semantic versioning when publishing/consuming modules. This number should be higher in our opinion - everyone should use it! Why? Because updating packages without semver can easily break Node.js apps.

Node.js Best Practices for 2017 - Semantic versioning survey results

Versioning your application / modules is critical - your consumers must know if a new version of a module is published and what needs to be done on their side to get the new version.

This is where semantic versioning comes into the picture. Given a version number MAJOR.MINOR.PATCH, increment the:

  • MAJOR version when you make incompatible API changes,
  • MINOR version when you add functionality (without breaking the API), and
  • PATCH version when you make backwards-compatible bug fixes.

npm also uses SemVer when installing your dependencies, so when you publish modules, always make sure to respect it. Otherwise, you can break other people's applications!
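The semver package - the same parser npm itself uses - makes these rules easy to experiment with, a quick sketch:

const semver = require('semver')

console.log(semver.satisfies('1.2.3', '^1.2.0')) // true  - minor/patch changes stay in range
console.log(semver.satisfies('2.0.0', '^1.2.0')) // false - a major bump leaves the range
console.log(semver.inc('1.2.3', 'minor'))        // '1.3.0'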

Secure Your Applications

Securing your users and customers data should be one of your top priorities in 2017. In 2016 alone, hundreds of millions of user accounts were compromised as a result of low security.

To get started with Node.js Security, read our Node.js Security Checklist, which covers topics like:

  • Security HTTP Headers,
  • Brute Force Protection,
  • Session Management,
  • Insecure Dependencies,
  • or Data Validation.

After you’ve embraced the basics, check out my Node Interactive talk on Surviving Web Security with Node.js!
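For example, the security HTTP headers mentioned in the checklist can be added in Express with the helmet module - a minimal sketch:

const express = require('express')
const helmet = require('helmet')

const app = express()

// sets sensible defaults, like X-Frame-Options and Strict-Transport-Security
app.use(helmet())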

Learn Serverless

Serverless started with the introduction of AWS Lambda. Since then it has been growing fast, with a blooming open-source community.

In the next years, serverless will become a major factor for building new applications. If you'd like to stay on the edge, you should start learning it today.

One of the most popular solutions is the Serverless Framework, which helps in deploying AWS Lambda functions.

Attend and Speak at Conferences and Meetups

Attending conferences and meetups are great ways to learn about new trends, use-cases or best practices. Also, it is a great forum to meet new people.

To take it one step forward, I'd like to encourage you to speak at one of these events as well!

As public speaking is tough, and “imagine everyone's naked” is the worst advice, I'd recommend checking out speaking.io for tips on public speaking!

Become a better Node.js developer in 2017

As 2017 will be the year of Node.js, we’d like to help you get the most out of it!

We just launched a new study program called "Owning Node.js" which helps you to become confident in:

  • Async Programming with Node.js
  • Creating servers with Express
  • Using Databases with Node
  • Project Structuring and building scalable apps



If you have any questions about the article, find me in the comments section!

JavaScript Clean Coding Best Practices - Node.js at Scale


Writing clean code is what you must know and do in order to call yourself a professional developer. There is no reasonable excuse for doing anything less than your best.

In this blog post, we will cover general clean coding principles for naming and using variables & functions, as well as some JavaScript specific clean coding best practices.

“Even bad code can function. But if the code isn’t clean, it can bring a development organization to its knees.” — Robert C. Martin (Uncle Bob)


Node.js at Scale is a collection of articles focusing on the needs of companies with bigger Node.js installations and advanced Node developers. Chapters:


First of all, what does clean coding mean?

Clean coding means that in the first place you write code for your later self and for your co-workers - not for the machine.

Your code must be easily understandable for humans.

"Write code for your later self and for your co-workers in the first place - not for the machine." via @RisingStack

Click To Tweet

You know you are working with clean code when each routine you read turns out to be pretty much what you expected.

JavaScript Clean Coding: The only valid measurement of code quality is WTFs/minute

JavaScript Clean Coding Best Practices

Now that we know what every developer should aim for, let’s go through the best practices!

How should I name my variables?

Use intention-revealing names and don't worry if you have long variable names instead of saving a few keyboard strokes.

If you follow this practice, your names become searchable, which helps a lot when you do refactors or you are just looking for something.

// DON'T
let d  
let elapsed  
const ages = arr.map((i) => i.age)

// DO
let daysSinceModification  
const agesOfUsers = users.map((user) => user.age)  

Also, make meaningful distinctions and don't add extra, unnecessary nouns to the variable names, like their type (Hungarian notation).

// DON'T
let nameString  
let theUsers

// DO
let name  
let users  

Make your variable names easy to pronounce, because it takes less effort for the human mind to process them.

When you are doing code reviews with your fellow developers, these names are easier to reference.

// DON'T
let fName, lName  
let cntr

let full = false  
if (cart.size > 100) {  
  full = true
}

// DO
let firstName, lastName  
let counter

const MAX_CART_SIZE = 100  
// ...
const isFull = cart.size > MAX_CART_SIZE  

In short, don't cause extra mental mapping with your names.

How should I write my functions?

Your functions should do one thing only on one level of abstraction.

Functions should do one thing. They should do it well. They should do it only. — Robert C. Martin (Uncle Bob)

// DON'T
function getUserRouteHandler (req, res) {  
  const { userId } = req.params
  // inline SQL query
  knex('user')
    .where({ id: userId })
    .first()
    .then((user) => res.json(user))
}

// DO
// User model (eg. models/user.js)
const tableName = 'user'  
const User = {  
  getOne (userId) {
    return knex(tableName)
      .where({ id: userId })
      .first()
  }
}

// route handler (eg. server/routes/user/get.js)
function getUserRouteHandler (req, res) {  
  const { userId } = req.params
  User.getOne(userId)
    .then((user) => res.json(user))
}

After you have written your functions properly, you can test how well you did with CPU profiling, which helps you find bottlenecks.

Use long, descriptive names

A function name should be a verb or a verb phrase, and it needs to communicate its intent, as well as the order and intent of the arguments.

A long descriptive name is way better than a short, enigmatic name or a long descriptive comment.

// DON'T
/**
 * Invite a new user with its email address
 * @param {String} user email address
 */
function inv (user) { /* implementation */ }

// DO
function inviteUser (emailAddress) { /* implementation */ }  


Avoid long argument list

Use a single object parameter and destructuring assignment instead. It also makes handling optional parameters much easier.

// DON'T
function getRegisteredUsers (fields, include, fromDate, toDate) { /* implementation */ }  
getRegisteredUsers(['firstName', 'lastName', 'email'], ['invitedUsers'], '2016-09-26', '2016-12-13')

// DO
function getRegisteredUsers ({ fields, include, fromDate, toDate }) { /* implementation */ }  
getRegisteredUsers({  
  fields: ['firstName', 'lastName', 'email'],
  include: ['invitedUsers'],
  fromDate: '2016-09-26',
  toDate: '2016-12-13'
})





Reduce side effects

Use pure functions without side effects, whenever you can. They are really easy to use and test.

// DON'T
function addItemToCart (cart, item, quantity = 1) {  
  const alreadyInCart = cart.get(item.id) || 0
  cart.set(item.id, alreadyInCart + quantity)
  return cart
}

// DO
// not modifying the original cart
function addItemToCart (cart, item, quantity = 1) {  
  const cartCopy = new Map(cart)
  const alreadyInCart = cartCopy.get(item.id) || 0
  cartCopy.set(item.id, alreadyInCart + quantity)
  return cartCopy
}

// or by inverting the method location
// you can expect that the original object will be mutated
// addItemToCart(cart, item, quantity) -> cart.addItem(item, quantity)
const cart = new Map()  
Object.assign(cart, {  
  addItem (item, quantity = 1) {
    const alreadyInCart = this.get(item.id) || 0
    this.set(item.id, alreadyInCart + quantity)
    return this
  }
})


Organize your functions in a file according to the stepdown rule

Higher level functions should be on top and lower levels below. It makes it natural to read the source code.

// DON'T
// "I need the full name for something..."
function getFullName (user) {  
  return `${user.firstName} ${user.lastName}`
}

function renderEmailTemplate (user) {  
  // "oh, here"
  const fullName = getFullName(user)
  return `Dear ${fullName}, ...`
}

// DO
function renderEmailTemplate (user) {  
  // "I need the full name of the user"
  const fullName = getFullName(user)
  return `Dear ${fullName}, ...`
}

// "I use this for the email template rendering"
function getFullName (user) {  
  return `${user.firstName} ${user.lastName}`
}


Query or modification

Functions should either do something (modify) or answer something (query), but not both.
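A quick sketch of the difference, with a hypothetical user object:

// DON'T - answers and modifies at the same time
function checkAndSetDefaultName (user) {
  if (!user.name) {
    user.name = 'anonymous'
    return false
  }
  return true
}

// DO - one function answers, another one modifies
function hasName (user) {
  return Boolean(user.name)
}

function setDefaultName (user) {
  user.name = 'anonymous'
}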


Everyone likes to write JavaScript differently, what to do?

As JavaScript is dynamic and loosely typed, it is especially prone to programmer errors.

Use project- or company-wide linter rules and a consistent formatting style.

The stricter the rules, the less effort will go into pointing out bad formatting in code reviews. It should cover things like consistent naming, indentation size, whitespace placement and even semicolons.

"The stricter the linter rules, the less effort needed to point out bad formatting in code reviews." by @RisingStack

Click To Tweet

The standard JS style is quite nice to start with, but in my opinion, it isn't strict enough. I can agree with most of the rules in the Airbnb style.

How to write nice async code?

Use Promises whenever you can.

Promises are natively available from Node 4. Instead of writing nested callbacks, you can have chainable Promise calls.

// AVOID
asyncFunc1((err, result1) => {  
  asyncFunc2(result1, (err, result2) => {
    asyncFunc3(result2, (err, result3) => {
      console.log(result3)
    })
  })
})

// PREFER
asyncFuncPromise1()  
  .then(asyncFuncPromise2)
  .then(asyncFuncPromise3)
  .then((result) => console.log(result))
  .catch((err) => console.error(err))

Most of the libraries out there have both callback and promise interfaces - prefer the latter. You can even convert callback APIs to promise-based ones by wrapping them with packages like es6-promisify.

// AVOID
const fs = require('fs')

function readJSON (filePath, callback) {  
  fs.readFile(filePath, (err, data) => {
    if (err) {
      return callback(err)
    }

    try {
      callback(null, JSON.parse(data))
    } catch (ex) {
      callback(ex)
    }
  })
}

readJSON('./package.json', (err, pkg) => { console.log(err, pkg) })

// PREFER
const fs = require('fs')  
const promisify = require('es6-promisify')

const readFile = promisify(fs.readFile)  
function readJSON (filePath) {  
  return readFile(filePath)
    .then((data) => JSON.parse(data))
}

readJSON('./package.json')  
  .then((pkg) => console.log(pkg))
  .catch((err) => console.error(err))

The next step would be to use async/await (≥ Node 7) or generators with co (≥ Node 4) to achieve synchronous like control flows for your asynchronous code.

const request = require('request-promise-native')

function getExtractFromWikipedia (title) {  
  return request({
    uri: 'https://en.wikipedia.org/w/api.php',
    qs: {
      titles: title,
      action: 'query',
      format: 'json',
      prop: 'extracts',
      exintro: true,
      explaintext: true
    },
    method: 'GET',
    json: true
  })
    .then((body) => Object.keys(body.query.pages).map((key) => body.query.pages[key].extract))
    .then((extracts) => extracts[0])
    .catch((err) => {
      console.error('getExtractFromWikipedia() error:', err)
      throw err
    })
} 

// PREFER
async function getExtractFromWikipedia (title) {  
  let body
  try {
    body = await request({ /* same parameters as above */ })
  } catch (err) {
    console.error('getExtractFromWikipedia() error:', err)
    throw err
  }

  const extracts = Object.keys(body.query.pages).map((key) => body.query.pages[key].extract)
  return extracts[0]
}

// or
const co = require('co')

const getExtractFromWikipedia = co.wrap(function * (title) {  
  let body
  try {
    body = yield request({ /* same parameters as above */ })
  } catch (err) {
    console.error('getExtractFromWikipedia() error:', err)
    throw err
  }

  const extracts = Object.keys(body.query.pages).map((key) => body.query.pages[key].extract)
  return extracts[0]
})

getExtractFromWikipedia('Robert Cecil Martin')  
  .then((robert) => console.log(robert))


How should I write performant code?

In the first place, you should write clean code, then use profiling to find performance bottlenecks.

Never try to write performant and smart code first, instead, optimize the code when you need to and refer to true impact instead of micro-benchmarks.

"Write clean code first and optimize it when you need to. Refer to true impact instead of micro-benchmarks!"

Click To Tweet

Still, there are some straightforward scenarios, like eagerly initializing whatever you can (eg. joi schemas in route handlers, which would be used in every request and add serious overhead if recreated every time), and using asynchronous instead of blocking code.

Next up in Node.js at Scale

In the next episode of this series, we’ll discuss advanced Node.js async best practices and avoiding the callback hell!

If you have any questions regarding clean coding, don’t hesitate and let me know in the comments!


Understanding the Node.js Event Loop - Node.js at Scale


This article helps you to understand how the Node.js event loop works, and how you can leverage it to build fast applications. We’ll also discuss the most common problems you might encounter, and the solutions for them.

With Node.js at Scale we are creating a collection of articles focusing on the needs of companies with bigger Node.js installations, and developers who already learned the basics of Node.

Upcoming chapters for the Node.js at Scale series:


The problem

Most of the backends behind websites don’t need to do complicated computations. Our programs spend most of their time waiting for the disk to read and write, or waiting for the wire to transmit our message and send back the answer.

IO operations can be orders of magnitude slower than data processing. Take this for example: a high-end SSD can have a read speed of 200-730 MB/s. Reading just one kilobyte of data would take around 1.4 microseconds, but during this time a CPU clocked at 2GHz could have performed around 2,800 instruction-processing cycles.

For network communications it can be even worse, just try and ping google.com

$ ping google.com
64 bytes from 172.217.16.174: icmp_seq=0 ttl=52 time=33.017 ms  
64 bytes from 172.217.16.174: icmp_seq=1 ttl=52 time=83.376 ms  
64 bytes from 172.217.16.174: icmp_seq=2 ttl=52 time=26.552 ms  
64 bytes from 172.217.16.174: icmp_seq=3 ttl=52 time=40.153 ms  
64 bytes from 172.217.16.174: icmp_seq=4 ttl=52 time=37.291 ms  
64 bytes from 172.217.16.174: icmp_seq=5 ttl=52 time=58.692 ms  
64 bytes from 172.217.16.174: icmp_seq=6 ttl=52 time=45.245 ms  
64 bytes from 172.217.16.174: icmp_seq=7 ttl=52 time=27.846 ms  

The average latency is about 44 milliseconds. Just while waiting for a packet to make a round-trip on the wire, the previously mentioned processor can perform 88 million cycles.

The solution

Most operating systems provide some kind of Asynchronous IO interface, which allows you to start processing data that does not require the result of the communication, while the communication still goes on.

This can be achieved in several ways. Nowadays it is mostly done by leveraging the possibilities of multithreading, at the cost of extra software complexity. For example, reading a file in Java or Python is a blocking operation. Your program cannot do anything else while it is waiting for the network / disk communication to finish. All you can do - at least in Java - is to fire up a different thread, then notify your main thread when the operation has finished.

It is tedious, complicated, but gets the job done. But what about Node? Well, we are surely facing some problems as Node.js - or more like V8 - is single-threaded. Our code can only run in one thread.

EDIT: This is not entirely true. Both Java and Python have async interfaces, but using them is definitely more difficult than in Node.js. Thanks to Shahar and Dirk Harrington for pointing this out.

You might have heard that in a browser, setting setTimeout(someFunction, 0) can sometimes fix things magically. But why does setting a timeout to 0, deferring execution by 0 milliseconds, fix anything? Isn’t it the same as simply calling someFunction immediately? Not really.
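A two-line demonstration of the deferral:

setTimeout(() => console.log('deferred'), 0)
console.log('immediate')
// prints "immediate" first, then "deferred"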

First of all, let's take a look at the call stack, or simply, “stack”. I am going to make things simple, as we only need to understand the very basics of the call stack. In case you are familiar how it works, feel free to jump to the next section.

Stack

Whenever you call a function, its return address, parameters and local variables will be pushed to the stack. If you call another function from the currently running function, its contents will be pushed on top in the same manner as the previous one - with its return address.

For the sake of simplicity I will say that 'a function is pushed' to the top of the stack from now on, even though it is not exactly correct.

Let's take a look!

function main () {
  const hypotenuse = getLengthOfHypotenuse(3, 4)
  console.log(hypotenuse)
}

function getLengthOfHypotenuse (a, b) {
  const squareA = square(a)
  const squareB = square(b)
  const sumOfSquares = squareA + squareB
  return Math.sqrt(sumOfSquares)
}

function square (number) {
  return number * number
}

main()

main is called first:

The main function

then main calls getLengthOfHypotenuse with 3 and 4 as arguments

The getLengthOfHypotenuse function

afterwards, square is called with the value of a

The square(a) function

when square returns, it is popped from the stack, and its return value is assigned to squareA. squareA is added to the stack frame of getLengthOfHypotenuse

Variable a

same goes for the next call to square

The square(b) function

Variable b

in the next line the expression squareA + squareB is evaluated

sumOfSquares

then Math.sqrt is called with sumOfSquares

Math.sqrt

now all that is left for getLengthOfHypotenuse is to return the final value of its calculation

The return function

the returned value gets assigned to hypotenuse in main

hypotenuse

the value of hypotenuse is logged to console

The console log

finally, main returns without any value, gets popped from the stack leaving it empty

Finally

SIDE NOTE: You saw that local variables are popped from the stack when the function's execution finishes. That happens only when you work with simple values such as numbers, strings and booleans. Values of objects, arrays and such are stored in the heap, and your variable is merely a pointer to them. If you pass on this variable, you will only pass the said pointer, making these values mutable in different stack frames. When the function is popped from the stack, only the pointer to the object gets popped, leaving the actual value in the heap. The garbage collector is the guy who takes care of freeing up space once the objects have outlived their usefulness.
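A short sketch of that difference:

function mutate (obj, num) {
  obj.value = 42 // the pointer is copied - both variables point to the same heap object
  num = 42       // numbers are copied by value - the caller's copy stays untouched
}

const data = { value: 1 }
let counter = 1
mutate(data, counter)
console.log(data.value) // 42
console.log(counter)    // 1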

Enter Node.js Event Loop

The Node.js Event Loop - cat version

No, not this loop. :)

So what happens when we call something like setTimeout, http.get, process.nextTick, or fs.readFile? None of these things can be found in V8's code - they are available in the browser's Web APIs, and in the C++ APIs in the case of Node.js. To understand this, we will have to understand the order of execution a little bit better.

Let's take a look at a more common Node.js application - a server listening on localhost:3000/. Upon getting a request, the server will call wttr.in/<city> to get the weather, print some kind messages to the console, and it forwards responses to the caller after receiving them.

'use strict'  
const express = require('express')  
const superagent = require('superagent')  
const app = express()

app.get('/', sendWeatherOfRandomCity)

function sendWeatherOfRandomCity (request, response) {  
  getWeatherOfRandomCity(request, response)
  sayHi()
}

const CITIES = [  
  'london',
  'newyork',
  'paris',
  'budapest',
  'warsaw',
  'rome',
  'madrid',
  'moscow',
  'beijing',
  'capetown',
]

function getWeatherOfRandomCity (request, response) {  
  const city = CITIES[Math.floor(Math.random() * CITIES.length)]
  superagent.get(`http://wttr.in/${city}`)
    .end((err, res) => {
      if (err) {
        console.log('O snap')
        return response.status(500).send('There was an error getting the weather, try looking out the window')
      }
      const responseText = res.text
      response.send(responseText)
      console.log('Got the weather')
    })

  console.log('Fetching the weather, please be patient')
}

function sayHi () {  
  console.log('Hi')
}

app.listen(3000)  

What will be printed out aside from getting the weather when a request is sent to localhost:3000?

If you have some experience with Node, you shouldn't be surprised that even though console.log('Fetching the weather, please be patient') is called after console.log('Got the weather') in the code, the former will print first resulting in:

Fetching the weather, please be patient  
Hi  
Got the weather  

What happened? Even though V8 is single-threaded, the underlying C++ API of Node isn't. It means that whenever we call something that is a non-blocking operation, Node will call some code that will run concurrently with our javascript code under the hood. Once this hidden thread receives the value it awaits or throws an error, the provided callback will be called with the necessary parameters.

SIDE NOTE: The ‘some code’ we mentioned is actually part of libuv. libuv is the open source library that handles the thread-pool, the signaling, and all the other magic that is needed to make asynchronous tasks work. It was originally developed for Node.js, but a lot of other projects use it by now.





To peek under the hood, we need to introduce two new concepts: the event loop and the task queue.

Task queue

Javascript is a single-threaded, event-driven language. This means that we can attach listeners to events, and when a said event fires, the listener executes the callback we provided.

Whenever you call setTimeout, http.get or fs.readFile, Node.js sends these operations to a different thread allowing V8 to keep executing our code. Node also calls the callback when the counter has run down or the IO / http operation has finished.

These callbacks can enqueue other tasks and those functions can enqueue others and so on. This way you can read a file while processing a request in your server, and then make an http call based on the read contents without blocking other requests from being handled.

"#nodejs sends IO operations to different threads so #v8 can keep executing our code" via @RisingStack #javascript

Click To Tweet

However, we only have one main thread and one call-stack, so in case there is another request being served when the said file is read, its callback will need to wait for the stack to become empty. The limbo where callbacks are waiting for their turn to be executed is called the task queue (or event queue, or message queue). Callbacks are being called in an infinite loop whenever the main thread has finished its previous task, hence the name 'event loop'.

In our previous example it would look something like this:

  1. express registers a handler for the 'request' event that will be called when a request arrives at '/'
  2. it skips the functions and starts listening on port 3000
  3. the stack is empty, waiting for the 'request' event to fire
  4. upon an incoming request, the long awaited event fires, and express calls the provided handler sendWeatherOfRandomCity
  5. sendWeatherOfRandomCity is pushed to the stack
  6. getWeatherOfRandomCity is called and pushed to the stack
  7. Math.floor and Math.random are called, pushed to the stack and popped, and a city from CITIES is assigned to city
  8. superagent.get is called with 'http://wttr.in/${city}', and the handler is set for the end event
  9. the http request to http://wttr.in/${city} is sent to a background thread, and the execution continues
  10. 'Fetching the weather, please be patient' is logged to the console, getWeatherOfRandomCity returns
  11. sayHi is called, 'Hi' is printed to the console
  12. sendWeatherOfRandomCity returns, gets popped from the stack leaving it empty
  13. waiting for http://wttr.in/${city} to send its response
  14. once the response has arrived, the end event is fired.
  15. the anonymous handler we passed to .end() is called, gets pushed to the stack with all variables in its closure, meaning it can see and modify the values of express, superagent, app, CITIES, request, response, city and all the functions we have defined
  16. response.send() gets called with either a 200 or 500 status code, but again, it is sent to a background thread, so the response stream does not block our execution; the anonymous handler is popped from the stack.

So now we can understand why the previously mentioned setTimeout hack works. Even though we set the counter to zero, it defers the execution until the current stack and the task queue are empty, allowing the browser to redraw the UI, or Node to serve other requests.

Microtasks and Macrotasks

If this wasn't enough, we actually have more than one task queue: one for microtasks and another for macrotasks.

examples of microtasks:

  • process.nextTick
  • promises
  • Object.observe

examples of macrotasks:

  • setTimeout
  • setInterval
  • setImmediate
  • I/O

Let's take a look at the following code:

console.log('script start')

const interval = setInterval(() => {  
  console.log('setInterval')
}, 0)

setTimeout(() => {  
  console.log('setTimeout 1')
  Promise.resolve().then(() => {
    console.log('promise 3')
  }).then(() => {
    console.log('promise 4')
  }).then(() => {
    setTimeout(() => {
      console.log('setTimeout 2')
      Promise.resolve().then(() => {
        console.log('promise 5')
      }).then(() => {
        console.log('promise 6')
      }).then(() => {
        clearInterval(interval)
      })
    }, 0)
  })
}, 0)

Promise.resolve().then(() => {  
  console.log('promise 1')
}).then(() => {
  console.log('promise 2')
})

this will log to the console:

script start
promise 1
promise 2
setInterval
setTimeout 1
promise 3
promise 4
setInterval
setTimeout 2
setInterval
promise 5
promise 6

According to the WHATWG specification, exactly one (macro)task should get processed from the macrotask queue in one cycle of the event loop. After said macrotask has finished, all of the available microtasks will be processed within the same cycle. While these microtasks are being processed, they can queue more microtasks, which will all be run one by one, until the microtask queue is exhausted.

This diagram tries to make the picture a bit clearer:

The Node.js Event Loop

In our case:

Cycle 1:

  1. `setInterval` is scheduled as task
  2. `setTimeout 1` is scheduled as task
  3. in `Promise.resolve 1` both `then`s are scheduled as microtasks
  4. the stack is empty, microtasks are run

Task queue: setInterval, setTimeout 1

Cycle 2:

  1. the microtask queue is empty, `setInterval`'s handler can be run, another `setInterval` is scheduled as a task, right behind `setTimeout 1`

Task queue: setTimeout 1, setInterval

Cycle 3:

  1. the microtask queue is empty, `setTimeout 1`'s handler can be run, `promise 3` and `promise 4` are scheduled as microtasks,
  2. the handlers of `promise 3` and `promise 4` are run, and `setTimeout 2` is scheduled as a task

Task queue: setInterval, setTimeout 2

Cycle 4:

  1. the microtask queue is empty, `setInterval`'s handler can be run, another `setInterval` is scheduled as a task, right behind `setTimeout 2`

Task queue: setTimeout 2, setInterval

Cycle 5:

  1. `setTimeout 2`'s handler is run, `promise 5` and `promise 6` are scheduled as microtasks

Now handlers of promise 5 and promise 6 should be run clearing our interval, but for some strange reason setInterval is run again. However, if you run this code in Chrome, you will get the expected behavior.

We can fix this in Node too with process.nextTick and some mind-boggling callback hell.

console.log('script start')

const interval = setInterval(() => {  
  console.log('setInterval')
}, 0)

setTimeout(() => {  
  console.log('setTimeout 1')
  process.nextTick(() => {
    console.log('nextTick 3')
    process.nextTick(() => {
      console.log('nextTick 4')
      setTimeout(() => {
        console.log('setTimeout 2')
        process.nextTick(() => {
          console.log('nextTick 5')
          process.nextTick(() => {
            console.log('nextTick 6')
            clearInterval(interval)
          })
        })
      }, 0)
    })
  })
}, 0)

process.nextTick(() => {  
  console.log('nextTick 1')
  process.nextTick(() => {
    console.log('nextTick 2')
  })
})

This is the exact same logic as our beloved promises use, only a little bit more hideous. At least it gets the job done the way we expected.


Tame the async beast!

As we saw, we need to manage and pay attention to both task queues, and to the event loop when we write an app in Node.js - in case we wish to leverage all its power, and if we want to keep our long running tasks from blocking the main thread.

The event loop might be a slippery concept to grasp at first, but once you get the hang of it, you won't be able to imagine that there is life without it. The continuation passing style that can lead to a callback hell might look ugly, but we have Promises, and soon we will have async-await in our hands... and while we are (a)waiting, you can simulate async-await using co and/or koa.

One last parting advice:

Knowing how Node.js and V8 handle long running executions, you can start using this for your own good. You might have heard before that you should send your long running loops to the task queue. You can do it by hand or make use of async.js.
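A sketch of doing it by hand with setImmediate, processing a big array in chunks so other callbacks get a chance to run in between:

function processChunked (items, processItem, done) {
  const CHUNK_SIZE = 1000

  function processNextChunk (startIndex) {
    const endIndex = Math.min(startIndex + CHUNK_SIZE, items.length)
    for (let i = startIndex; i < endIndex; i++) {
      processItem(items[i])
    }
    if (endIndex < items.length) {
      // yield back to the event loop before taking the next chunk
      setImmediate(() => processNextChunk(endIndex))
    } else {
      done()
    }
  }

  processNextChunk(0)
}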

Happy coding!

If you have any questions or thoughts, share them in the comments - I’ll be there! The next part of the Node.js at Scale series discusses Garbage Collection in Node.js; I recommend checking it out!

npm Publishing Tutorial - Node.js at Scale


With Node.js at Scale we are creating a collection of articles focusing on the needs of companies with bigger Node.js installations, and developers who already learned the basics of Node.

In this second chapter of Node.js at Scale you are going to learn how to expand the npm registry with your own modules. This tutorial is also going to explain how versioning works.

Upcoming chapters for the Node.js at Scale series:


npm Module Publishing

When writing Node.js apps, there are so many things on npm that can help us be more productive. We don't have to deal with low-level things like padding a string from the left, because there are already existing modules that are (eventually) available on the npm registry.

Where do these modules come from?

The modules are stored in a huge registry which is powered by a CouchDB instance.

The official public npm registry is at https://registry.npmjs.org/. It is powered by a CouchDB database, which has a public mirror at https://skimdb.npmjs.com/registry. The code for the couchapp is available at https://github.com/npm/npm-registry-couchapp.

How do modules make it to the registry?

People like you write them for themselves or for their co-workers and they share the code with their fellow JavaScript developers.

When should I consider publishing?

  • If you want to share code between projects,
  • if you think that others might run into the very same problem and you'd like to help them,
  • if you have a bit (or even more) code that you think you can make use of later.

Creating a module

First let's create a module: npm init -y should take care of it, as you've learned in the previous post.

{
  "name": "npm-publishing",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "repository": {
    "type": "git",
    "url": "git+https://github.com/author/modulename"
  },
  "bugs": {
    "url": "https://github.com/caolan/async/issues"
  },
  "license": "ISC"
}

Let's break this down really quick. These fields in your package.json are mandatory when you're building a module for others to use.

First, you should give your module a distinct name because it has to be unique in the npm registry. Make sure it does not collide with any trademarks out there! main describes which file will be returned when your users do a require('modulename'). You can leave it as default or set it to any file in your project, but make sure you actually point it to a valid filename.

keywords should also be included because npm is going to index your package based on those fields and people will be able to find your module if they search those keywords in npm's search, or in any third party npm search site.

author - well, obviously, that's going to be you, but if anyone helps you develop your project, be so kind to include them too! :) Also, it is very important to include where people can contact you if they'd like to.

In the repository field, you can see where the code is hosted, and the bugs section tells you where you can file bugs if you find one in the package. To quickly jump to the bug report site, you can use npm bugs modulename.

#1 Licensing

A solid license and widespread license adoption help Node adoption by large companies. Code is a valuable resource, and sharing it has its own costs.

Licensing is really hard, but sites like choosealicense.com can help you pick a license that fits your needs.

Generally when people publish modules to npm they use the MIT license.

The MIT License is a permissive free software license originating at the Massachusetts Institute of Technology (MIT). As a permissive license, it puts only very limited restriction on reuse and has therefore an excellent license compatibility.

#2 Semantic Versioning

Versioning is so important that it deserves its own section.

Most of the modules in the npm registry follow the specification called semantic versioning. Semantic versioning describes the version of a software as 3 numbers separated by "."-s. It describes how this version number has to change when changes are made to the software itself.

Given a version number MAJOR.MINOR.PATCH, increment the:

  • MAJOR version when you make incompatible API changes,
  • MINOR version when you add functionality in a backwards-compatible manner, and
  • PATCH version when you make backwards-compatible bug fixes.

Additional labels for the pre-release and the build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

These numbers are for machines, not for humans! Don't assume that people will be discouraged from using your libraries when you often change the major version.

"If you break your API, think about your users and BUMP THAT MAJOR!" via @RisingStack #nodejs #semver

Click To Tweet

You have to start versioning at 1.0.0!

Most people think that doing changes while the software is still in "beta" phase should not respect the semantic versioning. They are wrong! It is really important to communicate breaking changes to your users even in beta phase. Always think about your users who want to experiment with your project.





#3 Documentation

Having proper documentation is imperative if you’d like to share your code with others. Putting a README.md file in your project’s root folder is usually enough, and if you publish the package to the registry, npm will generate a site like this one from it. It's all done automatically, and it helps other people when they try to use your code.

Before publishing, make sure you have all documentation in place and up to date.

#4 Keeping secret files out of your package

Using a specific file called .npmignore will keep your secret or private files from being published. Use that to your advantage: add the files you do not wish to upload to .npmignore.

If you use a .gitignore file, npm will use that too by default. Like git, npm looks for .npmignore and .gitignore files in all subdirectories of your package, not only in the root directory.
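A sample .npmignore might look like this (the entries are illustrative):

# never ship credentials or local configuration
.env
*.pem

# keep the published package lean
coverage/
test/
docs/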

#5 Encouraging contributions

When you open up your code to the public, you should consider adding some guidelines on how to contribute. Make sure contributors know how to help you deal with software bugs and how to add new features to your module.

There are a few approaches for this, but in general you should consider using GitHub's issue and pull-request templates.

npm publish

Now you understand everything that's necessary to publish your first module. To do so, you can type: npm publish and the npm-cli will upload the code to the registry.

Congratulations, your module is now public on the npm registry! Visit
www.npmjs.com/package/yourpackagename for the public URL.

If you published something public to npm, it's going to stay there forever. There is little you can do to make it non-discoverable. Once it hits the public registry, every other replica that's connected to it will copy all the data. Be careful when publishing.

I published something that I didn't mean to.

We're human. We make mistakes - but what can be done now? Since the left-pad incident, npm changed its unpublish policy. If there is no package on the registry that depends on your package, then you're fine to unpublish it - but remember, all the replicas will copy all the data, so someone somewhere will always be able to get it. If it contained any secrets, make sure you change them after the act, and remember to add them to the .npmignore file for the next publish.

"If you accidentally published secrets to #npm change & add them to the .npmignore file!" via @RisingStack #nodejs

Click To Tweet

Private Scoped Packages

If you don't want to publish code to a public registry, or you're not allowed to (for any corporate reasons), npm allows organizations to open an organization account so that they can push to the registry without being public. This way you can share private code between you and your co-workers.

Further read on how to set it up: https://docs.npmjs.com/misc/scope

npm enterprise

If you'd like to further tighten your security by running a registry by yourself, you can do that pretty easily. npm has an on-premise version that can be run behind corporate firewalls. Read more about setting up npm enterprise.


Build something!

Now that you know all these things, go and build something. If you’re up for a little bragging, make sure you tweet us (@risingstack) the name of the package this tutorial helped you to build! If you have any questions, you’ll find me in the comments.

Happy publishing!

In the next part of the Node.js at Scale series, you're going to learn about the Node.js module system and require.


npm Best Practices - Node.js at Scale


Node Hero was a Node.js tutorial series focusing on teaching the most essential Node.js best practices, so one can start developing applications using it.

With our new series, called Node.js at Scale, we are creating a collection of articles focusing on the needs of companies with bigger Node.js installations, and developers who already learned the basics of Node.

In the first chapter of Node.js at Scale you are going to learn the best practices on using npm as well as tips and tricks that can save you a lot of time on a daily basis.

Upcoming chapters for the Node.js at Scale series:


npm Best Practices

npm install is the most common way of using the npm cli - but it has a lot more to offer! In this chapter of Node.js at Scale you will learn how npm can help you during the full lifecycle of your application - from starting a new project through development and deployment.

#0 Know your npm

Before diving into the topics, let's see some commands that help you find out which version of npm you are running and which commands are available.

npm versions

To get the version of the npm cli you are actively using, you can do the following:

$ npm --version
2.13.2  

npm can return a lot more than just its own version - it can return the version of the current package, the Node.js version you are using and OpenSSL or V8 versions:

$ npm version
{ bleak: '1.0.4',
  npm: '2.15.0',
  ares: '1.10.1-DEV',
  http_parser: '2.5.2',
  icu: '56.1',
  modules: '46',
  node: '4.4.2',
  openssl: '1.0.2g',
  uv: '1.8.0',
  v8: '4.5.103.35',
  zlib: '1.2.8' }

npm help

Like most CLI toolkits, npm has great built-in help functionality as well. Descriptions and synopses are always available. These are essentially man-pages.

$ npm help test
NAME  
       npm-test - Test a package

SYNOPSIS  
           npm test [-- <args>]

           aliases: t, tst

DESCRIPTION  
       This runs a package's "test" script, if one was provided.

       To run tests as a condition of installation, set the npat config to true.

"9 npm best practices - a must-read collection for #nodejs developers" via @RisingStack

Click To Tweet

#1 Start new projects with npm init

When starting a new project, npm init can help you a lot by interactively creating a package.json file. It prompts you with questions, for example about the project's name or description. However, there is a quicker solution!

$ npm init --yes

If you use npm init --yes, it won't prompt for anything, it will just create a package.json with your defaults. To set these defaults, you can use the following commands:

npm config set init.author.name YOUR_NAME  
npm config set init.author.email YOUR_EMAIL  
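
Other defaults can be configured the same way - for example, npm also understands an init.license key (the value below is just an example):

npm config set init.license MIT  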





#2 Finding npm packages

Finding the right packages can be quite challenging - there are hundreds of thousands of modules to choose from. We know this from experience, and the developers participating in our latest Node.js survey also told us that selecting the right npm package is frustrating. Let's try to pick a module that helps us send HTTP requests!

One website that makes the task a lot easier is npms.io. It shows metrics like quality, popularity and maintenance, calculated based on factors such as whether a module has outdated dependencies, whether it has linters configured, whether it is covered with tests, and when the most recent commit was made.

finding npm packages

#3 Investigate npm packages

Once we've picked our module (the request module in our example), we should take a look at the documentation and check the open issues to get a better picture of what we are about to require into our application. Don't forget: the more npm packages you use, the higher the risk of having a vulnerable or malicious one. If you'd like to read more about npm-related security risks, read our related guideline.

If you'd like to open the homepage of the module from the cli you can do:

$ npm home request

To check open issues or the publicly available roadmap (if there’s any), you can try this:

$ npm bugs request

Alternatively, if you'd just like to check a module's git repository, type this:

$ npm repo request

#4 Saving dependencies

Once you've found the package you want to include in your project, you have to install and save it. The most common way of doing that is by running npm install request.

If you'd like to take that one step further and automatically add it to your package.json file, you can do:

$ npm install request --save

npm saves your dependencies with the ^ prefix by default. This means that during the next npm install, the latest version without a major version bump will be installed. To change this behaviour, you can run:

$ npm config set save-prefix='~'

In case you'd like to save the exact version, you can try:

$ npm config set save-exact true
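
To make the difference concrete, here is how a hypothetical request entry would be saved to package.json under each setting (version numbers are illustrative):

"request": "^2.74.0"   // default: the next install may pull in any 2.x.y >= 2.74.0
"request": "~2.74.0"   // with save-prefix='~': only 2.74.x patch releases
"request": "2.74.0"    // with save-exact: this version and nothing else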

#5 Lock down dependencies

Even if you save modules with exact version numbers as shown in the previous section, you should be aware that most npm module authors don't. That's totally fine - they do it to get patches and features automatically.

The situation can easily become problematic for production deployments: it's possible to have different versions locally than in production if someone released a new version in the meantime. The problem arises when this new version contains a bug that affects your production system.

To solve this issue, you may want to use npm shrinkwrap. It generates an npm-shrinkwrap.json file that contains not just the exact versions of the modules installed on your machine, but also the versions of their dependencies, and so on. Once this file is in place, npm install will use it to reproduce the same dependency tree.
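
Generating the file takes a single command:

$ npm shrinkwrap

A heavily trimmed npm-shrinkwrap.json could look like this (names and versions are illustrative):

{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "request": {
      "version": "2.74.0",
      "dependencies": {
        "qs": { "version": "6.2.1" }
      }
    }
  }
}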

#6 Check for outdated dependencies

To check for outdated dependencies, npm comes with a built-in tool: the npm outdated command. You have to run it in the directory of the project you'd like to check.

$ npm outdated
Package                 Current  Wanted  Latest  Location  
conventional-changelog    0.5.3   0.5.3   1.1.0  @risingstack/docker-node  
eslint-config-standard    4.4.0   4.4.0   6.0.1  @risingstack/docker-node  
eslint-plugin-standard    1.3.1   1.3.1   2.0.0  @risingstack/docker-node  
rimraf                    2.5.1   2.5.1   2.5.4  @risingstack/docker-node  

Once you maintain several projects, keeping all their dependencies up to date can become an overwhelming task. To automate it, you can use Greenkeeper, which automatically sends pull requests to your repositories when a dependency is updated.

#7 No devDependencies in production

Development dependencies are called development dependencies for a reason - you don't have to install them in production. This makes your deployment artifacts smaller and more secure, as fewer modules in production means fewer potential security problems.

To install production dependencies only, run this:

$ npm install --production

Alternatively, you can set the NODE_ENV environment variable to production:

$ NODE_ENV=production npm install

"Don't install development dependencies in production" via @RisingStack #nodejs

Click To Tweet

#8 Secure your projects and tokens

If you use npm with a logged-in user, your npm token is placed in the .npmrc file. As a lot of developers store their dotfiles on GitHub, sometimes these tokens get published by accident. Currently, there are thousands of results when searching for .npmrc files on GitHub, and a huge percentage of them contain tokens. If you have dotfiles in your repositories, double check that your credentials are not pushed!

Another source of possible security issues are the files that get published to npm by accident. By default, npm respects the .gitignore file, and files matching its rules won't be published. However, if you add an .npmignore file, it overrides the content of .gitignore - the two are not merged.
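
If you'd rather whitelist explicitly what gets published, npm also supports a files field in package.json (the file names below are illustrative):

{
  "files": [
    "lib/",
    "index.js",
    "README.md"
  ]
}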

#9 Developing packages

When developing packages locally, you usually want to try them out in one of your projects before publishing to npm. This is where npm link comes to the rescue.

npm link creates a symlink in the global folder that points to the package in which it was executed.

You can then run npm link package-name from another location to create a symbolic link from the globally installed package-name into the node_modules directory of the current folder.

"Use npm link to test packages locally" via @RisingStack #nodejs

Click To Tweet

Let's see it in action!

# create a symlink to the global folder
/projects/request $ npm link

# link request to the current node_modules
/projects/my-server $ npm link request

# after running this project, the require('request') 
# will include the module from projects/request

Next up on Node.js at Scale: SemVer and Module Publishing

The next article in the Node.js at Scale series will be a SemVer deep dive, covering how to publish Node.js modules.

Let me know if you have any questions in the comments!

Writing a JavaScript Framework - Introduction to Data Binding, beyond Dirty Checking


This is the fourth chapter of the Writing a JavaScript framework series. In this chapter, I am going to explain the dirty checking and the accessor data binding techniques and point out their strengths and weaknesses.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing
  3. Sandboxed code evaluation
  4. Data binding introduction (current chapter)
  5. Data Binding with ES6 Proxies
  6. Custom elements
  7. Client side routing

An introduction to data binding

Data binding is a general technique that binds data sources from the provider and consumer together and synchronizes them.

This is a general definition, which outlines the common building blocks of data binding techniques.

  • A syntax to define the provider and the consumer.
  • A syntax to define which changes should trigger synchronization.
  • A way to listen to these changes on the provider.
  • A synchronizing function that runs when these changes happen. I will call this function the handler() from now on.

The above steps are implemented in different ways by the different data binding techniques. The upcoming sections introduce two such techniques, namely dirty checking and the accessor method. Both have their strengths and weaknesses, which I will briefly discuss after introducing them.

Dirty checking

Dirty checking is probably the most well-known data binding method. It is simple in concept, and it doesn't require complex language features, which makes it a nice candidate for legacy usage.

The syntax

Defining the provider and the consumer doesn't require any special syntax - plain JavaScript objects will do.

const provider = {  
  message: 'Hello World'
}
const consumer = document.createElement('p')  

Synchronization is usually triggered by property mutations on the provider. Properties that should be observed for changes must be explicitly mapped with their handler().

observe(provider, 'message', message => {  
  consumer.innerHTML = message
})

The observe() function simply saves the (provider, property) -> handler mapping for later use.

function observe (provider, prop, handler) {  
  provider._handlers = provider._handlers || {}
  provider._prevValues = provider._prevValues || {}
  providers.add(provider)  // 'providers' is a Set, defined in the next snippet
  provider._handlers[prop] = handler
}

With this, we have a syntax for defining the provider and the consumer and a way to register handler() functions for property changes. The public API of our library is ready, now comes the internal implementation.

Listening on changes

Dirty checking is called dirty for a reason: it runs periodic checks instead of listening on property changes directly. Let's call this check a digest cycle from now on. A digest cycle iterates through every (provider, property) -> handler entry added by observe() and checks whether the property value has changed since the last iteration. If it has, it runs the handler() function. A simple implementation looks like this:

// the registry of providers, populated by observe()
const providers = new Set()

function digest () {  
  providers.forEach(digestProvider)
}

function digestProvider (provider) {  
  for (let prop in provider._handlers) {
    if (provider._prevValues[prop] !== provider[prop]) {
      provider._prevValues[prop] = provider[prop]
      // run the handler registered for this property with the new value
      provider._handlers[prop](provider[prop])
    }
  }
}

The digest() function needs to be run from time to time to ensure a synchronized state.
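
For example, a crude but working way to drive it is a timer - purely illustrative, real frameworks schedule digest cycles more cleverly:

// run a digest cycle roughly 30 times per second (interval is illustrative)
setInterval(digest, 33)

provider.message = 'Hello NX'
// within ~33ms the next digest cycle notices the change and runs the handler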

The accessor technique

The accessor technique is the now trending one. It is a bit less widely supported as it requires the ES5 getter/setter functionality, but it makes up for this in elegance.

The syntax

Defining the provider requires special syntax. The plain provider object has to be passed to the observable() function, which transforms it into an observable object.

const provider = observable({  
  greeting: 'Hello',
  subject: 'World'
})
const consumer = document.createElement('p')  

This small inconvenience is more than compensated for by the simple handler() mapping syntax. With dirty checking, we would have to define every observed property explicitly, like below.

observe(provider, 'greeting', greeting => {  
  consumer.innerHTML = greeting + ' ' + provider.subject
})

observe(provider, 'subject', subject => {  
  consumer.innerHTML = provider.greeting + ' ' + subject
})

This is verbose and clumsy. The accessor technique can automatically detect the used provider properties inside the handler() function, which allows us to simplify the above code.

observe(() => {  
  consumer.innerHTML = provider.greeting + ' ' + provider.subject
})

The implementation of observe() is different from the dirty checking one. It just executes the passed handler() function and flags it as the currently active one while it is running.

let activeHandler

function observe(handler) {  
  activeHandler = handler
  handler()
  activeHandler = undefined
}

Note that we exploit the single-threaded nature of JavaScript here by using the single activeHandler variable to keep track of the currently running handler() function.

Listening on changes

This is where the 'accessor technique' name comes from. The provider is augmented with getters/setters, which do the heavy lifting in the background. The idea is to intercept the get/set operations of the provider properties in the following way.

  • get: If there is an activeHandler running, save the (provider, property) -> activeHandler mapping for later use.
  • set: Run all handler() functions that are mapped to the (provider, property) pair.

The accessor data binding technique.

The following code demonstrates a simple implementation of this for a single provider property.

function observableProp (provider, prop) {  
  let value = provider[prop]
  provider._handlers = provider._handlers || {}
  Object.defineProperty(provider, prop, {
    get () {
      // register the currently running handler for this property
      if (activeHandler) {
        provider._handlers[prop] = activeHandler
      }
      return value
    },
    set (newValue) {
      value = newValue
      const handler = provider._handlers[prop]
      if (handler) {
        activeHandler = handler
        handler()
        activeHandler = undefined
      }
    }
  })
}

The observable() function mentioned in the previous section walks the provider properties recursively and converts all of them into observables with the above observableProp() function.

function observable (provider) {  
  for (let prop in provider) {
    observableProp(provider, prop)
    if (typeof provider[prop] === 'object') {
      observable(provider[prop])
    }
  }
  // return the provider so it can be assigned directly, as in the syntax section
  return provider
}

This is a very simple implementation, but it is enough for a comparison between the two techniques.
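
As a quick sanity check of the sketch above, here is the accessor API in action (logged output noted in comments):

const provider = observable({
  greeting: 'Hello',
  subject: 'World'
})

observe(() => {
  // reading the properties here registers this handler for both of them
  console.log(provider.greeting + ' ' + provider.subject)
})
// logs 'Hello World' immediately

provider.subject = 'NX'
// the setter re-runs the registered handler, logging 'Hello NX'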

Comparison of the techniques

In this section, I will briefly outline the strengths and weaknesses of dirty checking and the accessor technique.

Syntax

Dirty checking requires no syntax to define the provider and consumer, but mapping the (provider, property) pair with the handler() is clumsy and not flexible.

The accessor technique requires the provider to be wrapped by the observable() function, but the automatic handler() mapping makes up for this. For large projects with data binding, it is a must-have feature.

Performance

Dirty checking is notorious for its bad performance. It has to check every (provider, property) -> handler entry possibly multiple times during every digest cycle. Moreover, it has to grind even when the app is idle, since it can't know when the property changes happen.

The accessor method is faster, but performance can be unnecessarily degraded with big observable objects. Replacing every property of the provider with accessors is usually overkill. One solution is to build the getter/setter tree dynamically when needed instead of doing it ahead of time in one batch. Alternatively, a simpler solution is wrapping the unneeded properties with a noObserve() function, which tells observable() to leave that part untouched. This sadly introduces some extra syntax.

Flexibility

Dirty checking naturally works with both expando (dynamically added) and accessor properties.

The accessor technique has a weak spot here. Expando properties are not supported, because they are left out of the initial getter/setter tree. This causes issues with arrays, for example, but it can be fixed by manually running observableProp() after adding a new property. Getter/setter properties are not supported either, since an accessor can't be wrapped by another accessor. A common workaround is using a computed() function instead of a getter, which introduces even more custom syntax.
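
To illustrate the expando workaround mentioned above (the property name is made up for the example):

// added after observable() ran, so no getter/setter wraps it yet
provider.punctuation = '!'

// handlers won't react to changes of provider.punctuation
// until the property is wrapped manually:
observableProp(provider, 'punctuation')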

Timing alternatives

Dirty checking doesn't give us much freedom here since we have no way of knowing when the actual property changes happen. The handler() functions can only be executed asynchronously, by running the digest() cycle from time to time.

Getters/setters added by the accessor technique are triggered synchronously, so we have a freedom of choice. We may decide to run the handler() right away, or save it in a batch that is executed asynchronously later. The first approach gives us the advantage of predictability, while the latter allows for performance enhancements by removing duplicates.
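
A minimal sketch of the batched variant - not NX's actual code - where a Set removes duplicate handlers within a batch and a microtask flushes it:

const batch = new Set()

function queueHandler (handler) {
  // schedule exactly one flush per batch, when its first handler arrives
  if (batch.size === 0) {
    Promise.resolve().then(runBatch)
  }
  // a Set ignores duplicates, so a handler queued twice runs only once
  batch.add(handler)
}

function runBatch () {
  batch.forEach(handler => handler())
  batch.clear()
}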

About the next article

In the next article, I will introduce the nx-observe data binding library and explain how to replace ES5 getters/setters by ES6 Proxies to eliminate most of the accessor technique's weaknesses.

Conclusion

If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this Github repository.

I hope you found this a good read, see you next time when I’ll discuss data binding with ES6 Proxies!

If you have any thoughts on the topic, please share them in the comments.


JavaScript Garbage Collection Improvements - Orinoco


This article summarizes the three main effects of the new V8 upgrade (5.0) on JavaScript garbage collection.

In the latest releases of Node.js, the V8 JavaScript engine was upgraded to version 5.0. Among new ES2015 features, it includes three major improvements for the garbage collector. These changes lay the groundwork for a new garbage collector in V8, code-named Orinoco.

V8 implements a generational garbage collector, meaning it has a memory segment called new space for the young generation and an old space for the old generation. New objects are allocated in the new space, and if they survive two garbage collections there, they are moved to the old space.

#1: Parallelised JavaScript Garbage Collection

The problem with the two spaces is that moving objects between them is expensive: the objects need to be copied over, and the pointers must be updated. Moving objects out of the young generation is called evacuation, while adding them to the old generation is called compaction.

Since there are no dependencies between young generation evacuation and old generation compaction, the garbage collector can perform these in parallel, resulting in a reduction of compaction time of 75% from ~7ms to under 2ms on average.

Parallel garbage collection in Node.js

"The Parallelised Garbage Collector can reduce compaction time by 75%" via @RisingStack #javascript #nodejs

Click To Tweet

#2: Track Pointers Improvement

When an object is moved on the heap, the garbage collector has to update all the pointers - but first, it has to find all the pointers to the old location. For this, V8 uses a data structure called a remembered set to keep track of the interesting pointers in the heap. A pointer is classified as interesting if the next garbage collector run may move it:

  • pointers to objects in the young generation, as they may be moved to the old generation,
  • pointers to objects in fragmented pages, as objects may be moved to other pages during compaction

In older versions of V8, remembered sets were implemented using store buffers, which contain the addresses of all incoming pointers. The problem is that this may result in duplicated entries: a store buffer may end up including a pointer multiple times, and two store buffers may contain the same pointer. This would make parallelizing the pointer updates really complex.





Instead of dealing with the extra complexity, Orinoco removes it by reorganizing the remembered set: each page now stores the offsets of the interesting pointers originating from that page. With this technique, parallel updates of the pointers become feasible.

It has a huge performance impact - it can reduce the maximum pause time of compacting garbage collection by 40%.

#3: Black Allocation

Black allocation, introduced by Orinoco, is involved in the marking phase of the garbage collector. With it, objects allocated in the old generation are marked black instantly, meaning they are "live" objects. The rationale is that objects allocated in the old space are likely to live long, so they should survive the next garbage collection. Objects colored black are not visited by the garbage collector - they are put on black pages of the old space.

It speeds up garbage collection thanks to faster marking progress, as well as less garbage collection work in general.


For more information on V8 updates, follow the V8 project's blog.

Let me know if you have any questions or additional thoughts in the comments!


Node Hero - Monitoring Node.js Applications


This article is the 13th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In the last article of the series, I’m going to show you how to do Node.js monitoring and how to find advanced issues in production environments.

Upcoming and past chapters:

  1. Getting started with Node.js
  2. Using NPM
  3. Understanding async programming
  4. Your first Node.js HTTP server
  5. Node.js database tutorial
  6. Node.js request module tutorial
  7. Node.js project structure tutorial
  8. Node.js authentication using Passport.js
  9. Node.js unit testing tutorial
  10. Debugging Node.js applications
  11. Node.js Security Tutorial
  12. Deploying Node.js application to a PaaS
  13. Monitoring Node.js Applications [you are reading it now]

The Importance of Node.js Monitoring

Getting insights into production systems is critical when you are building Node.js applications! You have an obligation to constantly detect bottlenecks and figure out what slows your product down.

An even greater issue is to handle and preempt downtimes. You must be notified as soon as they happen, preferably before your customers start to complain. Based on these needs, proper monitoring should give you at least the following features and insights into your application's behavior:

  • Profiling on a code level: You have to understand how much time it takes to run each function in a production environment, not just locally.

  • Monitoring network connections: If you are building a microservices architecture, you have to monitor network connections and lower delays in the communication between your services.

  • Performance dashboard: Knowing and constantly seeing the most important performance metrics of your application is essential to have a fast, stable production system.

  • Real-time alerting: For obvious reasons, if anything goes down, you need to get notified immediately. This means that you need tools that can integrate with Pagerduty or Opsgenie - so your DevOps team won’t miss anything important.

"Getting insights into production systems is critical when you are building #nodejs applications" via @RisingStack

Click To Tweet

Server Monitoring versus Application Monitoring

One concept developers tend to confuse is monitoring servers versus monitoring the applications themselves. As we do a lot of virtualization these days, these concepts should be treated separately - a single server can host dozens of applications.

Let's go through the major differences!

Server Monitoring

Server monitoring is responsible for the host machine. It should be able to help you answer the following questions:

  • Does my server have enough disk space?
  • Does it have enough CPU time?
  • Does my server have enough memory?
  • Can it reach the network?

For server monitoring, you can use tools like zabbix.

Application Monitoring

Application monitoring, on the other hand, is responsible for the health of a given application instance. It should let you know the answers to the following questions:

  • Can an instance reach the database?
  • How many requests does it handle?
  • What are the response times for the individual instances?
  • Can my application serve requests? Is it up?

For application monitoring, I recommend using our tool called Trace. What else? :)

We developed it to be an easy to use and efficient tool that you can use to monitor and debug applications from the moment you start building them, up to the point when you have a huge production app with hundreds of services.

How to Use Trace for Node.js Monitoring

To get started with Trace, head over to https://trace.risingstack.com and create your free account!

Once you've registered, follow these steps to add Trace to your Node.js applications. It only takes a minute - these are the steps you should perform:

Start Node.js monitoring with these steps

Easy, right? If everything went well, you should see that the service you connected has just started sending data to Trace:

Reporting service in Trace for Node.js Monitoring

#1: Measure your performance

As the first step of monitoring your Node.js application, I recommend heading over to the metrics page and checking out the performance of your services.

Basic Node.js performance metrics

  • You can use the response time panel to check out median and 95th percentile response data. It helps you to figure out when and why your application slows down and how it affects your users.
  • The throughput graph shows requests per minute (rpm) for status code categories (200-299 // 300-399 // 400-499 // 500+). This way you can easily separate healthy and problematic HTTP requests within your application.
  • The memory usage graph shows how much memory your process uses. It’s quite useful for recognizing memory leaks and preempting crashes.

Advanced Node.js Monitoring Metrics

If you’d like to see special Node.js metrics, check out the garbage collection and event loop graphs. Those can help you to hunt down memory leaks. Read our metrics documentation.

#2: Set up alerts

As I mentioned earlier, you need a proper alerting system in action for your production application.

Go to the alerting page of Trace and click Create a new alert.

  • The most important thing to do here is to set up downtime and memory alerts. Trace will notify you on email / Slack / Pagerduty / Opsgenie, and you can use Webhooks as well.

  • I recommend setting up the alert we call Error rate by status code to know about HTTP requests with 4XX or 5XX status codes. These are errors you should definitely care about.

  • It can also be useful to create an alert for Response time - and get notified when your app starts to slow down.

#3: Investigate memory heapdumps

Go to the Profiler page and request a new memory heapdump, wait 5 minutes, and request another one. Download both and open them on the Profiles page of Chrome DevTools. Select the second (most recent) one and click Comparison.

chrome heap snapshot for finding a node.js memory leak

With this view, you can easily find memory leaks in your application. In a previous article, I've written about this process in detail: Hunting a Ghost - Finding a Memory Leak in Node.js.

#4: CPU profiling

Profiling on the code level is essential to understand how much time your functions take to run in the actual production environment. Luckily, Trace has this area covered too.

All you have to do is head over to the CPU Profiles tab on the Profiling page. Here you can request and download a profile, which you can load into Chrome DevTools as well.

CPU profiling in Trace

Once you've loaded it, you'll be able to see a 10-second timeframe of your application with all of your functions, their timings, and URLs.

With this data, you'll be able to figure out what slows down your application and deal with it!

The End

Update: as a sequel to Node Hero, we have started a new series called Node.js at Scale. Check it out if you are interested in more in-depth articles!

This is it.

During the 13 episodes of the Node Hero series, you learned the basics of building great applications with Node.js.

I hope you enjoyed it and improved a lot! Please share this series with your friends if you think they need it as well - and show them Trace too. It’s a great tool for Node.js development!

If you have any questions regarding Node.js monitoring, let me know in the comments section!


Writing a JavaScript framework - Execution timing, beyond setTimeout


This is the second chapter of the Writing a JavaScript framework series. In this chapter, I am going to explain the different ways of executing asynchronous code in the browser. You will read about the event loop and the differences between timing techniques, like setTimeout and Promises.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing (current chapter)
  3. Sandboxed code evaluation
  4. Data binding introduction
  5. Data Binding with ES6 Proxies
  6. Custom elements
  7. Client side routing

Async code execution

Most of you are probably familiar with Promise, process.nextTick(), setTimeout() and maybe requestAnimationFrame() as ways of executing asynchronous code. They all use the event loop internally, but they behave quite differently regarding precise timing.

In this chapter, I will explain the differences, then show you how to implement a timing system that a modern framework like NX requires. Instead of reinventing the wheel, we will use the native event loop to achieve our goals.

The event loop

The event loop is not even mentioned in the ES6 spec. JavaScript only has jobs and job queues on its own. A more complex event loop is specified separately by Node.js and the HTML5 spec. Since this series is about the front-end, I will explain the latter one here.

The event loop is called a loop for a reason. It is infinitely looping and looking for new tasks to execute. A single iteration of this loop is called a tick. The code executed during a tick is called a task.

while (eventLoop.waitForTask()) {  
  eventLoop.processNextTask()
}

Tasks are synchronous pieces of code that may schedule other tasks in the loop. An easy programmatic way to schedule a new task is setTimeout(taskFn). However, tasks may come from several other sources like user events, networking or DOM manipulation.

Execution timing: Event loop with tasks

Task queues

To complicate things a bit, the event loop can have multiple task queues. The only two restrictions are that events from the same task source must belong to the same queue and tasks must be processed in insertion order in every queue. Apart from these, the user agent is free to do as it wills. For example, it may decide which task queue to process next.

while (eventLoop.waitForTask()) {  
  const taskQueue = eventLoop.selectTaskQueue()
  if (taskQueue.hasNextTask()) {
    taskQueue.processNextTask()
  }
}

With this model, we lose precise control over timing. The browser may decide to completely empty several other queues before it gets to our task scheduled with setTimeout().

Execution timing: Event loop with task queues

The microtask queue

Fortunately, the event loop also has a single queue called the microtask queue. The microtask queue is completely emptied in every tick after the current task finishes executing.

while (eventLoop.waitForTask()) {  
  const taskQueue = eventLoop.selectTaskQueue()
  if (taskQueue.hasNextTask()) {
    taskQueue.processNextTask()
  }

  const microtaskQueue = eventLoop.microTaskQueue
  while (microtaskQueue.hasNextMicrotask()) {
    microtaskQueue.processNextMicrotask()
  }
}

The easiest way to schedule a microtask is Promise.resolve().then(microtaskFn). Microtasks are processed in insertion order, and since there is only one microtask queue, the user agent can't mess with us this time.

Moreover, microtasks can schedule new microtasks that will be inserted in the same queue and processed in the same tick.
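
A tiny demo of this ordering - guaranteed by the spec, since the microtask queue is drained before the next task runs:

setTimeout(() => console.log('task'))
Promise.resolve().then(() => console.log('microtask'))
console.log('synchronous')

// output: synchronous, microtask, task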

Execution timing: Event loop with microtask queue

Rendering

The last thing missing is the rendering schedule. Unlike event handling or parsing, rendering is not done by separate background tasks. It is an algorithm that may run at the end of every loop tick.

The user agent has a lot of freedom again: It may render after every task, but it may decide to let hundreds of tasks execute without rendering.

Fortunately, there is requestAnimationFrame(), which executes the passed function right before the next render. Our final event loop model looks like this:

while (eventLoop.waitForTask()) {  
  const taskQueue = eventLoop.selectTaskQueue()
  if (taskQueue.hasNextTask()) {
    taskQueue.processNextTask()
  }

  const microtaskQueue = eventLoop.microTaskQueue
  while (microtaskQueue.hasNextMicrotask()) {
    microtaskQueue.processNextMicrotask()
  }

  if (shouldRender()) {
    applyScrollResizeAndCSS()
    runAnimationFrames()
    render()
  }
}

Execution timing: Event loop with rendering

Now let’s use all this knowledge to build a timing system!

Using the event loop

Like most modern frameworks, NX deals with DOM manipulation and data binding in the background. It batches operations and executes them asynchronously for better performance. To time these things right, it relies on Promises, MutationObservers and requestAnimationFrame().

The desired timing is this:

  1. Code from the developer
  2. Data binding and DOM manipulation reactions by NX
  3. Hooks defined by the developer
  4. Rendering by the user agent

#Step 1

NX registers object mutations with ES6 Proxies and DOM mutations with a MutationObserver synchronously (more about these in the next chapters). It delays the reactions as microtasks until step 2 for optimized performance. This delay is done by Promise.resolve().then(reaction) for object mutations, and handled automatically by the MutationObserver as it uses microtasks internally.

#Step 2

The code (task) from the developer has finished running, and the microtask reactions registered by NX start executing. Since they are microtasks, they run in order. Note that we are still in the same loop tick.

#Step 3

NX runs the hooks passed by the developer using requestAnimationFrame(hook). This may happen in a later loop tick. The important thing is that the hooks run before the next render and after all data, DOM and CSS changes are processed.

#Step 4

The browser renders the next view. This may also happen in a later loop tick, but it never happens before the previous steps in a tick.

Things to keep in mind

We just implemented a simple but effective timing system on top of the native event loop. It works well in theory, but timing is a delicate thing, and slight mistakes can cause some very strange bugs.

In a complex system, it is important to set up some rules about the timing and keep to them later. For NX I have the following rules.

  1. Never use setTimeout(fn, 0) for internal operations
  2. Register microtasks with the same method
  3. Reserve microtasks for internal operations only
  4. Do not pollute the developer hook execution time window with anything else

#Rule 1 and 2

Reactions on data and DOM manipulation should execute in the order the manipulations happened. It is okay to delay them as long as their execution order is not mixed up. Mixing execution order makes things unpredictable and difficult to reason about.
setTimeout(fn, 0) is totally unpredictable. Registering microtasks with different methods also leads to mixed-up execution order. For example, microtask2 would incorrectly execute before microtask1 in the example below.

Promise.resolve().then().then(microtask1)  
Promise.resolve().then(microtask2)  

Execution timing: Microtask registration method

#Rule 3 and 4

Separating the time window of the developer code execution from that of the internal operations is important. Mixing the two would cause seemingly unpredictable behavior, and it would eventually force developers to learn about the internal workings of the framework. I think many front-end developers already have experience with this.

Conclusion

If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this Github repository.

I hope you found this a good read, see you next time when I’ll discuss sandboxed code evaluation!

If you have any thoughts on the topic, please share them in the comments.