Controlling the Node.js security risk of npm dependencies

This article is a guest post from Guy Podjarny, CEO at Snyk, building dev tools to fix known vulnerabilities in open source components

Open source packages - and npm specifically - are undoubtedly awesome. They make developers extremely productive by giving each of us a wealth of existing functionality just waiting to be consumed. If we had to write all this functionality ourselves, we'd struggle to create a fraction of what we do today.

As a result, a typical Node.js application today consumes LOTS of npm packages, often hundreds or thousands of them. What we often overlook, however, is that each of these packages, alongside its functionality, also pulls in its Node.js security risks. Many packages open new ports, thus increasing the attack surface. Roughly 76% of Node shops use vulnerable packages, some of which are extremely severe; and open source projects regularly grow stale, neglecting to fix security flaws.

Inevitably, using npm packages will expose you to security risks. Fortunately, there are several questions you can ask which can reduce your risk substantially. This post outlines these questions, and how to get them answered.

#1: Which packages am I using?

The more packages you use, the higher the risk of having a vulnerable or malicious package amongst them. This holds true not only for the packages you use directly but also for the indirect dependencies they use.

Discovering your dependencies is as easy as running npm ls in your application’s parent folder, which lists the packages you use. You can use the --prod argument to only display production dependencies (which impact your security the most), and add --long to get a short description of each package. Check out this post to better understand how you can slice and dice your npm dependencies.

~/proj/node_redis $ npm ls --prod --long
redis
│ /Users/guypod/localproj/playground/node_redis
│ Redis client library
├── double-ended-queue
│   Extremely fast double-ended queue implementation
├── redis-commands
│   Redis commands
└── redis-parser
    Javascript Redis protocol (RESP) parser

Figure: Inventorying the few dependencies of node_redis

A new crop of Dependency Management services, such as bitHound and VersionEye, can also list the dependencies you use, as well as track some of the information below.


Now that you know what you have, you can ask a few questions to assess the risk each package involves. Below are a few examples of questions you should ask, why you should ask them, and suggestions on how you can get them answered.

#2: Am I still using this package?

As time goes by and your code changes, you’re likely to stop using certain packages and add new ones instead. However, developers don’t typically remove a package from the project when they stop using it, as some other part of the code may need it.

As a result, projects have a tendency to accumulate unused dependencies. While not directly a security concern, these dependencies unnecessarily grow your attack surface and add clutter to the code. For instance, an attacker may trick one package into loading an unused package with a more severe vulnerability, escalating the potential damage.


Checking for unused dependencies is most easily done with the depcheck tool. depcheck scans your code for require and import statements, correlates those with the packages installed or mentioned in your package.json, and provides a report. The command can be tweaked in various ways using command flags, making it easy to automate checking for unused dependencies.

~/proj/Hardy $ depcheck
Unused dependencies  
* cucumber
* selenium-standalone
Unused devDependencies  
* jasmine-node

Figure: Checking for unused dependencies on the Hardy project

#3: Are other developers using this package?

Packages used by many are also more closely watched. The likelihood that someone has already encountered and addressed a security issue in them is higher than in a less utilized package.

For example, the secure-compare package was created to support string comparison that was not susceptible to a timing attack. However, a fundamental flaw in the package led it to achieve the exact opposite, making certain comparisons extremely time sensitive (and incorrect).

If you looked more closely, you’d see this package is very lightly used, downloaded only 20 times a day. If this were a more popular package, odds are someone would have found and reported the functional flaw sooner.

The easiest way to assess package usage is its download rate, indicated in the “Stats” section of the package’s npm page. You can extract those stats automatically using the npm stats API or browse historic download stats online. Alternatively, you can look at the number of “Dependent” packages – other packages that use the current one.
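To automate this check, you can query the registry’s download-counts endpoint directly. A minimal sketch of the URL construction, assuming the api.npmjs.org downloads API (the helper name is mine):

```javascript
// Build the npm downloads API URL for a package.
// period can be e.g. 'last-day', 'last-week' or 'last-month'.
function downloadsUrl (pkg, period) {
  return 'https://api.npmjs.org/downloads/point/' + period + '/' + pkg
}

console.log(downloadsUrl('redis', 'last-week'))
// → 'https://api.npmjs.org/downloads/point/last-week/redis'
```

Fetching that URL with https.get or any HTTP client returns a small JSON document with a downloads count you can threshold on.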

Node.js Security: Download stats for the `redis` package

#4: Am I using the latest version of this package?

Bugs, including security bugs, are constantly found and – hopefully – fixed. Also, it’s quite common to see newly reported vulnerabilities fixed only on the newest major branch of a project.

For instance, in early 2016, a Regular Expression Denial of Service (ReDoS) vulnerability was reported on the HMAC package hawk. ReDoS is a vulnerability where a long or carefully crafted input causes a regular expression match to take a very long time to compute. The processing thread does not serve new requests in the meantime, enabling a denial of service attack with just a small number of requests.

The vulnerability in hawk was quickly fixed in its latest major version stream, 4.x, but left older versions without a fix. Specifically, it left an unfixed vulnerability in the widely used request package, which depended on a vulnerable hawk version. The author later accepted Snyk’s pull request with a fix for the 3.x branch, but request users were exposed for a while, and the issue still exists in the older major release branches. This is just one example, but as a general rule, your dependencies are less likely to have security bugs if they’re on the latest version.

You can find out whether or not you’re using the latest version using the npm outdated command. This command also supports the --prod flag to ignore dev dependencies, as well as --json to simplify automation. You can also use Greenkeeper to proactively inform you when you're not using the latest version.

~/proj/handlebars.js $ npm outdated --prod
Package     Current  Wanted  Latest  Location  
async         1.5.2   1.5.2   2.0.1  handlebars  
source-map    0.4.4   0.4.4   0.5.6  handlebars  
uglify-js     2.6.2   2.7.3   2.7.3  handlebars  
yargs        3.32.0  3.32.0   5.0.0  handlebars  

Figure: npm outdated on handlebars prod dependencies
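The --json output lends itself to automation. A sketch of the idea, with the report object inlined instead of being read from npm outdated --json:

```javascript
// List dependencies whose installed version differs from the latest release,
// given an object shaped like `npm outdated --json` output.
function findOutdated (report) {
  return Object.keys(report).filter(function (name) {
    return report[name].current !== report[name].latest
  })
}

// Inlined sample data for illustration
const sample = {
  async: { current: '1.5.2', wanted: '1.5.2', latest: '2.0.1' },
  'uglify-js': { current: '2.7.3', wanted: '2.7.3', latest: '2.7.3' }
}

console.log(findOutdated(sample)) // → [ 'async' ]
```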

#5: When was this package last updated?

Creating an open source project, including npm packages, is fun. Many talented developers create such projects in their spare time, investing a lot of time and energy in making them good. Over time, however, the excitement often wears off, and life changes can make it hard to find the needed time.

As a result, npm packages often grow stale, not adding features and fixing bugs slowly – if at all. This reality isn’t great for functionality, but it’s especially problematic for security. Functional bugs typically only get in your way when you’re building something new, allowing some leeway for how quickly they’re addressed. Fixing security vulnerabilities is more urgent – once they become known, attackers may exploit them, and so time to fix is critical.


A good example of this case is a Cross-Site Scripting vulnerability in the marked package. Marked is a popular markdown parsing package, downloaded nearly 2M times a month. Initially released in mid-2011, Marked evolved rapidly over the next couple of years, but the pace slowed in 2014 and work stopped completely in mid-2015.

The XSS vulnerability was disclosed around the same time, and it has remained untouched ever since. The only way to protect yourself from the issue is to stop using marked, or use a Snyk patch, as explained below.

Inspecting your packages for their last update date is a good way to reduce the chance you’ll find yourself in such a predicament. You can do so via the npm UI or by running npm view <package> time.modified.

$ npm view marked time.modified

Figure: checking last modified time on marked
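You can turn that timestamp into a simple staleness check. A sketch (the one-year cutoff is an arbitrary choice, not an npm convention):

```javascript
// Flag a package as stale if its last publish is older than maxAgeDays.
// modifiedIso is the date string printed by `npm view <pkg> time.modified`.
function isStale (modifiedIso, nowMs, maxAgeDays) {
  const ageMs = nowMs - new Date(modifiedIso).getTime()
  return ageMs > maxAgeDays * 24 * 60 * 60 * 1000
}

const now = new Date('2016-10-01').getTime()
console.log(isStale('2015-07-30T12:00:00.000Z', now, 365)) // → true
```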

#6: How many maintainers do these packages have?

Many npm packages only have a single maintainer, or a very small number of those. While there’s nothing specifically wrong with that, those packages do have a higher risk of getting abandoned. In addition, larger teams are more likely to have at least some members who better understand and care more about security.

Identifying the packages that have only a few maintainers is a good way to assess your risk. Tracking npm maintainers is easily automated by using npm view <pkg> maintainers.

$ npm view express maintainers

[ 'dougwilson <[email protected]>',
  'hacksparrow <[email protected]>',
  'jasnell <[email protected]>',
  'mikeal <[email protected]>' ]

Figure: maintainers of the express package, per npm

However, many packages with a full team behind them publish automatically through a single npm account. Therefore, you’ll do well to also inspect the GitHub repository used to develop this package (the vast majority of npm packages are developed on GitHub). In the example above, you'll find there are 192 contributors to the express repo. Many only made one or two commits, but that's still quite a difference from the 4 listed npm maintainers.

You can find the relevant GitHub repository by running npm view <pkg> repository, and then query https://api.github.com/repos/<repo-user>/<repo-name>/contributors. For instance, for the marked package, you would first run npm view marked repository, and then fetch the contributors of the repository it points to. Alternatively, you can easily see the maintainers, the GitHub repository and its contributors via the npm and GitHub web UI.
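If you want to script this, the repository URL printed by npm can be turned into the corresponding GitHub API call. A sketch (the helper name is mine):

```javascript
// Derive the GitHub contributors API URL from a repository URL such as
// 'git+https://github.com/expressjs/express.git'.
function contributorsUrl (repoUrl) {
  const match = repoUrl.match(/github\.com\/([^/]+)\/([^/.]+)/)
  if (!match) return null // not a GitHub repository
  return 'https://api.github.com/repos/' + match[1] + '/' + match[2] + '/contributors'
}

console.log(contributorsUrl('git+https://github.com/expressjs/express.git'))
// → 'https://api.github.com/repos/expressjs/express/contributors'
```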

#7: Does this package have known security vulnerabilities?

The questions above primarily reflect the risk of a future problem. However, your dependencies may be bringing in some security flaws right now! Roughly 15% of packages carry a known vulnerability, either in their own code or in the dependencies they in turn bring in. According to Snyk's data, about 76% of Node shops use vulnerable dependencies in their applications.

You can easily find such vulnerable packages using Snyk. You can run snyk test in your terminal, or quickly test your GitHub repositories for vulnerable dependencies through the web UI. Snyk's test page holds other testing options.


Snyk also makes it easy to fix the found issues, using snyk wizard in the terminal or an automated fix pull request. Fixes are done using guided upgrades or open source patches. Snyk creates these patches by back-porting the original fix, and they are stored as part of its Open Source vulnerability database.

Node.js Security: Test git repos with Snyk

Once you're free of vulnerabilities, you should ensure code changes don't make you vulnerable again. If you're using Snyk, you can test if pull requests introduce a vulnerable dependency, or add a test such as snyk test to your build process.
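One way to wire this into a build is an npm script that gates your tests on the Snyk check. A minimal package.json sketch, assuming snyk is installed as a dev dependency and test/run.js is your own (hypothetical) test entry point:

```json
{
  "scripts": {
    "test": "snyk test && node test/run.js"
  },
  "devDependencies": {
    "snyk": "^1.0.0"
  }
}
```

Because snyk test exits with a non-zero code when it finds vulnerabilities, the && chain fails the build before your tests even run.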

Lastly, when a new vulnerability is disclosed, you'd want to learn about it before attackers do. New vulnerabilities are disclosed independently of your code changes, so a CI test is not sufficient. To get an email (and a fix pull request) from Snyk whenever a new vulnerability affects you, click "Watch" on the "Test my Repositories" page, or run snyk monitor when you deploy new code.

Solving Node.js Security

npm packages are amazing, and let us build software at an unprecedented pace. You should definitely keep using npm packages - but there's no reason to do so blindly. We covered 7 questions you can easily answer to better understand and reduce your security exposure:

  1. Which packages am I using? And for each one...
  2. Am I still using this package?
  3. Are other developers using this package?
  4. Am I using the latest version of this package?
  5. When was this package last updated?
  6. How many maintainers do these packages have?
  7. Does this package have known security vulnerabilities?

Answer those, and you'll be both productive and secure!

If you have any thoughts or questions on the topic, please share them in the comments.


Announcing NodeConf Budapest 2017

We are happy to announce that NodeConf is coming to Budapest again! After the successful NodeConf One Shot Budapest and countless meetups, it is time to go bigger and create NodeConf Budapest.

NodeConf Budapest, 20 January 2017

nodeconf budapest

The event will take place on Friday, 20 January 2017. The conference will be a single track, one-day event with 20 minutes long talks.

Call For Papers

If you'd like to be involved, we'd love to hear your proposal, even if you have never spoken at a conference before, so get in touch!

We believe that the diversity of our speakers and attendees provides the most engaging environment for discussion. We will factor this philosophy into our speaker evaluation.

Whether this is your first proposal or whether you're a seasoned pro, we'll do our best to help you shape your talk to make sure it flows with the day.

Please include the following information:

  • Title
  • Description (300 words or less)
  • 2-3 learning objectives
  • City of residence

Ready to answer the CFP? Send your proposal using our GitHub page:

CFP submission deadline: 15 Oct 2016 → Speaker selection: 30 Oct 2016

Early Bird Tickets

Early bird tickets are available for purchase - you can grab yours right here:


You can ping us on our Twitter channel, or join our Slack channel.

Until then, check out this amazing video from Mathias Buus on BitTorrent, p2p with Node.js from last year's One Shot NodeConf Budapest!

Writing a JavaScript framework - Sandboxed code evaluation

This is the third chapter of the Writing a JavaScript framework series. In this chapter, I am going to explain the different ways of evaluating code in the browser and the issues they cause. I will also introduce a method, which relies on some new or lesser known JavaScript features.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing
  3. Sandboxed code evaluation (current chapter)
  4. Data binding (part 1)
  5. Data binding (part 2)
  6. Custom elements
  7. Client side routing

The evil eval

The eval() function evaluates JavaScript code represented as a string.

A common solution for code evaluation is the eval() function. Code evaluated by eval() has access to closures and the global scope, which leads to a security issue called code injection and makes eval() one of the most notorious features of JavaScript.

Despite being frowned upon, eval() is very useful in some situations. Most modern front-end frameworks require its functionality but don't dare to use it because of the issue mentioned above. As a result, many alternative solutions emerged for evaluating strings in a sandbox instead of the global scope. The sandbox prevents the code from accessing secure data. Usually it is a simple JavaScript object, which replaces the global object for the evaluated code.

The common way

The most common eval() alternative is complete re-implementation - a two-step process, which consists of parsing and interpreting the passed string. First the parser creates an abstract syntax tree, then the interpreter walks the tree and interprets it as code inside a sandbox.

This is a widely used solution, but it is arguably too heavy for such a simple thing. Rewriting everything from scratch instead of patching eval() introduces a lot of bug opportunities and it requires frequent modifications to follow the latest language updates as well.

An alternative way

NX tries to avoid re-implementing native code. Evaluation is handled by a tiny library that uses some new or lesser known JavaScript features.

This section will progressively introduce these features and use them to explain the nx-compile code evaluation library. The library has a function called compileCode(), which works as shown below.

const code = compileCode('return num1 + num2')

// this logs 17 to the console
console.log(code({num1: 10, num2: 7}))

const globalNum = 12  
const otherCode = compileCode('return globalNum')

// global scope access is prevented
// this logs undefined to the console
console.log(otherCode({num1: 2, num2: 3}))  

By the end of this article, we will implement the compileCode() function in less than 20 lines.

new Function()

The Function constructor creates a new Function object. In JavaScript, every function is actually a Function object.

The Function constructor is an alternative to eval(). new Function(...args, 'funcBody') evaluates the passed 'funcBody' string as code and returns a new function that executes that code. It differs from eval() in two major ways.

  • It evaluates the passed code just once. Calling the returned function will run the code without re-evaluating it.
  • It doesn't have access to local closure variables, however, it can still access the global scope.

function compileCode (src) {
  return new Function(src)
}

new Function() is a better alternative to eval() for our use case. It has superior performance and security, but global scope access still has to be prevented to make it viable.

The 'with' keyword

The with statement extends the scope chain for a statement.

with is a lesser known keyword in JavaScript. It allows a semi-sandboxed execution. The code inside a with block tries to retrieve variables from the passed sandbox object first, but if it doesn't find them there, it looks for them in the closure and global scope. Closure scope access is prevented by new Function(), so we only have to worry about the global scope.

function compileCode (src) {
  src = 'with (sandbox) {' + src + '}'
  return new Function('sandbox', src)
}

with uses the in operator internally. For every variable access inside the block, it evaluates the variable in sandbox condition. If the condition is truthy, it retrieves the variable from the sandbox. Otherwise, it looks for the variable in the global scope. By fooling with to always evaluate variable in sandbox as truthy, we could prevent it from accessing the global scope.

Sandboxed code evaluation: Simple 'with' statement

ES6 proxies

The Proxy object is used to define custom behavior for fundamental operations like property lookup or assignment.

An ES6 Proxy wraps an object and defines trap functions, which may intercept fundamental operations on that object. Trap functions are invoked when an operation occurs. By wrapping the sandbox object in a Proxy and defining a has trap, we can overwrite the default behavior of the in operator.

function compileCode (src) {
  src = 'with (sandbox) {' + src + '}'
  const code = new Function('sandbox', src)

  return function (sandbox) {
    const sandboxProxy = new Proxy(sandbox, {has})
    return code(sandboxProxy)
  }
}

// this trap intercepts 'in' operations on sandboxProxy
function has (target, key) {
  return true
}

The above code fools the with block: variable in sandbox will always evaluate to true, because the has trap always returns true. As a result, the code inside the with block will never try to access the global object.

Sandboxed code evaluation: 'with' statement and proxies


Symbol.unscopables

A symbol is a unique and immutable data type and may be used as an identifier for object properties.

Symbol.unscopables is a well-known symbol. A well-known symbol is a built-in JavaScript Symbol, which represents internal language behavior. Well-known symbols can be used to add or overwrite iteration or primitive conversion behavior for example.

The Symbol.unscopables well-known symbol is used to specify an object value of whose own and inherited property names are excluded from the 'with' environment bindings.

Symbol.unscopables defines the unscopable properties of an object. Unscopable properties are never retrieved from the sandbox object in with statements, instead they are retrieved straight from the closure or global scope. Symbol.unscopables is a very rarely used feature. You can read about the reason it was introduced on this page.

Sandboxed code evaluation: 'with' statement and proxies. A security issue.

We can fix the above issue by defining a get trap on the sandbox Proxy, which intercepts Symbol.unscopables retrieval and always returns undefined. This will fool the with block into thinking that our sandbox object has no unscopable properties.

function compileCode (src) {
  src = 'with (sandbox) {' + src + '}'
  const code = new Function('sandbox', src)

  return function (sandbox) {
    const sandboxProxy = new Proxy(sandbox, {has, get})
    return code(sandboxProxy)
  }
}

function has (target, key) {
  return true
}

function get (target, key) {
  if (key === Symbol.unscopables) return undefined
  return target[key]
}

Sandboxed code evaluation: 'with' statement and proxies. Has and get traps.

WeakMaps for caching

The code is now secure, but its performance can still be improved, since it creates a new Proxy on every invocation of the returned function. This can be prevented by caching and using the same Proxy for every function call with the same sandbox object.

A proxy belongs to a sandbox object, so we could simply add the proxy to the sandbox object as a property. However, this would expose our implementation details to the public, and it wouldn't work in case of an immutable sandbox object frozen with Object.freeze(). Using a WeakMap is a better alternative in this case.

The WeakMap object is a collection of key/value pairs in which the keys are weakly referenced. The keys must be objects, and the values can be arbitrary values.

A WeakMap can be used to attach data to an object without directly extending it with properties. We can use WeakMaps to indirectly add the cached Proxies to the sandbox objects.
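A quick demonstration of why this works even for frozen sandboxes:

```javascript
// A WeakMap associates data with an object without touching the object,
// so it works even when the object is frozen, and exposes nothing publicly.
const cache = new WeakMap()
const sandbox = Object.freeze({})

cache.set(sandbox, 'the cached proxy would go here')
console.log(cache.get(sandbox)) // → 'the cached proxy would go here'
console.log(Object.keys(sandbox)) // → []
```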

const sandboxProxies = new WeakMap()

function compileCode (src) {
  src = 'with (sandbox) {' + src + '}'
  const code = new Function('sandbox', src)

  return function (sandbox) {
    if (!sandboxProxies.has(sandbox)) {
      const sandboxProxy = new Proxy(sandbox, {has, get})
      sandboxProxies.set(sandbox, sandboxProxy)
    }
    return code(sandboxProxies.get(sandbox))
  }
}

function has (target, key) {
  return true
}

function get (target, key) {
  if (key === Symbol.unscopables) return undefined
  return target[key]
}

This way only one Proxy will be created per sandbox object.

Final notes

The above compileCode() example is a working sandboxed code evaluator in just 19 lines of code. If you would like to see the full source code of the nx-compile library, you can find it in this GitHub repository.

Apart from explaining code evaluation, the goal of this chapter was to show how new ES6 features can be used to alter the existing ones, instead of re-inventing them. I tried to demonstrate the full power of Proxies and Symbols through the examples.


If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this GitHub repository.

I hope you found this a good read, see you next time when I’ll discuss data binding!

If you have any thoughts on the topic, please share them in the comments.

Node Hero - Monitoring Node.js Applications

This article is the 13th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In the last article of the series, I’m going to show you how to do Node.js monitoring and how to find advanced issues in production environments.

Upcoming and past chapters:

  1. Getting started with Node.js
  2. Using NPM
  3. Understanding async programming
  4. Your first Node.js HTTP server
  5. Node.js database tutorial
  6. Node.js request module tutorial
  7. Node.js project structure tutorial
  8. Node.js authentication using Passport.js
  9. Node.js unit testing tutorial
  10. Debugging Node.js applications
  11. Node.js Security Tutorial
  12. Deploying Node.js application to a PaaS
  13. Monitoring Node.js Applications [you are reading it now]

The Importance of Node.js Monitoring

Getting insights into production systems is critical when you are building Node.js applications! You have an obligation to constantly detect bottlenecks and figure out what slows your product down.

An even greater issue is to handle and preempt downtimes. You must be notified as soon as they happen, preferably before your customers start to complain. Based on these needs, proper monitoring should give you at least the following features and insights into your application's behavior:

  • Profiling on a code level: You have to understand how much time it takes to run each function in a production environment, not just locally.

  • Monitoring network connections: If you are building a microservices architecture, you have to monitor network connections and lower delays in the communication between your services.

  • Performance dashboard: Knowing and constantly seeing the most important performance metrics of your application is essential to have a fast, stable production system.

  • Real-time alerting: For obvious reasons, if anything goes down, you need to get notified immediately. This means that you need tools that can integrate with Pagerduty or Opsgenie - so your DevOps team won’t miss anything important.


Server Monitoring versus Application Monitoring

One concept developers are apt to confuse is monitoring servers versus monitoring the applications themselves. As we tend to do a lot of virtualization, these concepts should be treated separately: a single server can host dozens of applications.

Let’s go through the major differences!

Server Monitoring

Server monitoring is responsible for the host machine. It should be able to help you answer the following questions:

  • Does my server have enough disk space?
  • Does it have enough CPU time?
  • Does my server have enough memory?
  • Can it reach the network?

For server monitoring, you can use tools like zabbix.

Application Monitoring

Application monitoring, on the other hand, is responsible for the health of a given application instance. It should let you know the answers to the following questions:

  • Can an instance reach the database?
  • How many requests does it handle?
  • What are the response times for the individual instances?
  • Can my application serve requests? Is it up?

For application monitoring, I recommend using our tool called Trace. What else? :)

We developed it to be an easy to use and efficient tool that you can use to monitor and debug applications from the moment you start building them, up to the point when you have a huge production app with hundreds of services.

How to Use Trace for Node.js Monitoring

To get started with Trace, head over to the site and create your free account!

Once you have registered, follow these steps to add Trace to your Node.js applications. It only takes a minute - and these are the steps you should perform:

Start Node.js monitoring with these steps

Easy, right? If everything went well, you should see that the service you connected has just started sending data to Trace:

Reporting service in Trace for Node.js Monitoring

#1: Measure your performance

As the first step of monitoring your Node.js application, I recommend heading over to the metrics page and checking out the performance of your services.

Basic Node.js performance metrics

  • You can use the response time panel to check out median and 95th percentile response data. It helps you to figure out when and why your application slows down and how it affects your users.
  • The throughput graph shows requests per minute (rpm) for status code categories (200-299 // 300-399 // 400-499 // 500+). This way you can easily separate healthy and problematic HTTP requests within your application.
  • The memory usage graph shows how much memory your process uses. It’s quite useful for recognizing memory leaks and preempting crashes.
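The status code categories boil down to a simple bucketing. A sketch of how such grouping could be computed (an illustration, not Trace's actual implementation):

```javascript
// Map an HTTP status code to a category like those on the throughput graph.
function category (status) {
  if (status < 200) return 'other'
  if (status < 300) return '200-299'
  if (status < 400) return '300-399'
  if (status < 500) return '400-499'
  return '500+'
}

console.log([200, 301, 404, 503].map(category))
// → [ '200-299', '300-399', '400-499', '500+' ]
```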

Advanced Node.js Monitoring Metrics

If you’d like to see special Node.js metrics, check out the garbage collection and event loop graphs. Those can help you to hunt down memory leaks. Read our metrics documentation.

#2: Set up alerts

As I mentioned earlier, you need a proper alerting system in action for your production application.

Go to the alerting page of Trace and click on Create a new alert.

  • The most important thing to do here is to set up downtime and memory alerts. Trace will notify you on email / Slack / Pagerduty / Opsgenie, and you can use Webhooks as well.

  • I recommend setting up the alert we call Error rate by status code to know about HTTP requests with 4XX or 5XX status codes. These are errors you should definitely care about.

  • It can also be useful to create an alert for Response time - and get notified when your app starts to slow down.

#3: Investigate memory heapdumps

Go to the Profiler page and request a new memory heapdump, wait 5 minutes, and request another. Download them and open them in Chrome DevTools' Profiles panel. Select the second one (the most recent one), and click Comparison.

chrome heap snapshot for finding a node.js memory leak

With this view, you can easily find memory leaks in your application. In a previous article, I wrote about this process in detail; you can read it here: Hunting a Ghost - Finding a Memory Leak in Node.js

#4: CPU profiling

Profiling on the code level is essential to understand how much time your functions take to run in the actual production environment. Luckily, Trace has this area covered too.

All you have to do is to head over to the CPU Profiles tab on the Profiling page. Here you can request and download a profile which you can load into the Chrome DevTool as well.

CPU profiling in Trace

Once you've loaded it, you'll be able to see a 10-second timeframe of your application, with all of your functions, their timings and URLs.

With this data, you'll be able to figure out what slows down your application and deal with it!

The End

This is it.

During the 13 episodes of the Node Hero series, you learned the basics of building great applications with Node.js.

I hope you enjoyed it and improved a lot! Please share this series with your friends if you think they need it as well - and show them Trace too. It’s a great tool for Node.js development!

If you have any questions regarding Node.js monitoring, let me know in the comments section!

Writing a JavaScript framework - Execution timing, beyond setTimeout

This is the second chapter of the Writing a JavaScript framework series. In this chapter, I am going to explain the different ways of executing asynchronous code in the browser. You will read about the event loop and the differences between timing techniques, like setTimeout and Promises.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing (current chapter)
  3. Sandboxed code evaluation
  4. Data binding (part 1)
  5. Data binding (part 2)
  6. Custom elements
  7. Client side routing

Async code execution

Most of you are probably familiar with Promise, process.nextTick(), setTimeout() and maybe requestAnimationFrame() as ways of executing asynchronous code. They all use the event loop internally, but they behave quite differently regarding precise timing.

In this chapter, I will explain the differences, then show you how to implement a timing system that a modern framework like NX requires. Instead of reinventing the wheel, we will use the native event loop to achieve our goals.

The event loop

The event loop is not even mentioned in the ES6 spec; on its own, JavaScript only has jobs and job queues. A more complex event loop is specified separately by Node.js and by the HTML5 spec. Since this series is about the front-end, I will explain the latter here.

The event loop is called a loop for a reason. It is infinitely looping and looking for new tasks to execute. A single iteration of this loop is called a tick. The code executed during a tick is called a task.

while (eventLoop.waitForTask()) {
  eventLoop.processNextTask()
}

Tasks are synchronous pieces of code that may schedule other tasks in the loop. An easy programmatic way to schedule a new task is setTimeout(taskFn). However, tasks may come from several other sources like user events, networking or DOM manipulation.
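This ordering can be observed from plain JavaScript. A small sketch, runnable in Node or the browser:

```javascript
// A function scheduled with setTimeout() runs as a separate task,
// only after the current task (this whole script) has finished.
const order = []

setTimeout(() => order.push('task'), 0)
order.push('sync 1')
order.push('sync 2')

setTimeout(() => {
  // by now the earlier scheduled task has run too
  console.log(order.join(', '))  // sync 1, sync 2, task
}, 10)
```

Even with a delay of 0, the callback never interrupts the currently running code; it waits for a later tick.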

Execution timing: Event loop with tasks

Task queues

To complicate things a bit, the event loop can have multiple task queues. The only two restrictions are that events from the same task source must belong to the same queue and tasks must be processed in insertion order in every queue. Apart from these, the user agent is free to do as it wills. For example, it may decide which task queue to process next.

while (eventLoop.waitForTask()) {
  const taskQueue = eventLoop.selectTaskQueue()
  if (taskQueue.hasNextTask()) {
    taskQueue.processNextTask()
  }
}

With this model, we lose precise control over timing. The browser may decide to totally empty several other queues before it gets to our task scheduled with setTimeout().

Execution timing: Event loop with task queues

The microtask queue

Fortunately, the event loop also has a single queue called the microtask queue. The microtask queue is completely emptied on every tick, after the current task has finished executing.

while (eventLoop.waitForTask()) {
  const taskQueue = eventLoop.selectTaskQueue()
  if (taskQueue.hasNextTask()) {
    taskQueue.processNextTask()
  }

  const microtaskQueue = eventLoop.microTaskQueue
  while (microtaskQueue.hasNextMicrotask()) {
    microtaskQueue.processNextMicrotask()
  }
}

The easiest way to schedule a microtask is Promise.resolve().then(microtaskFn). Microtasks are processed in insertion order, and since there is only one microtask queue, the user agent can't mess with us this time.

Moreover, microtasks can schedule new microtasks that will be inserted in the same queue and processed in the same tick.
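Both properties - microtasks overtaking tasks, and nested microtasks running in the same tick - show up in a short sketch:

```javascript
const log = []

setTimeout(() => log.push('task'), 0)  // goes to a task queue

Promise.resolve().then(() => {
  log.push('microtask 1')
  // a microtask scheduled from a microtask still runs in this tick
  Promise.resolve().then(() => log.push('microtask 2'))
})

log.push('sync')

setTimeout(() => {
  console.log(log.join(', '))  // sync, microtask 1, microtask 2, task
}, 10)
```

The Promise callbacks run before the zero-delay setTimeout() callback, even though the timeout was registered first.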

Execution timing: Event loop with microtask queue


Rendering

The last thing missing is the rendering schedule. Unlike event handling or parsing, rendering is not done by separate background tasks. It is an algorithm that may run at the end of every loop tick.

The user agent has a lot of freedom again: It may render after every task, but it may decide to let hundreds of tasks execute without rendering.

Fortunately, there is requestAnimationFrame(), which executes the passed function right before the next render. Our final event loop model looks like this:

while (eventLoop.waitForTask()) {
  const taskQueue = eventLoop.selectTaskQueue()
  if (taskQueue.hasNextTask()) {
    taskQueue.processNextTask()
  }

  const microtaskQueue = eventLoop.microTaskQueue
  while (microtaskQueue.hasNextMicrotask()) {
    microtaskQueue.processNextMicrotask()
  }

  if (shouldRender()) {
    applyScrollResizeAndCSS()
    runAnimationFrames()
    render()
  }
}

Execution timing: Event loop with rendering

Now let’s use all this knowledge to build a timing system!

Using the event loop

Like most modern frameworks, NX deals with DOM manipulation and data binding in the background. It batches operations and executes them asynchronously for better performance. To time these things right, it relies on Promises, MutationObservers and requestAnimationFrame().

The desired timing is this:

  1. Code from the developer
  2. Data binding and DOM manipulation reactions by NX
  3. Hooks defined by the developer
  4. Rendering by the user agent

#Step 1

NX registers object mutations with ES6 Proxies and DOM mutations with a MutationObserver synchronously (more about these in the next chapters). It delays the reactions as microtasks until step 2 for optimized performance. This delay is done by Promise.resolve().then(reaction) for object mutations, and handled automatically by the MutationObserver as it uses microtasks internally.
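A minimal sketch of this kind of microtask-based batching (the names are illustrative, not NX's actual internals):

```javascript
// Reactions registered during the developer's code (the current task)
// are collected and flushed in a single microtask later in the same tick.
const pendingReactions = []

function queueReaction (reaction) {
  // schedule one flush microtask for the first reaction in a batch
  if (pendingReactions.length === 0) {
    Promise.resolve().then(flushReactions)
  }
  pendingReactions.push(reaction)
}

function flushReactions () {
  // runs after the current task finished, still in the same tick
  const reactions = pendingReactions.splice(0)
  reactions.forEach(reaction => reaction())
}

// usage: reactions are batched and run in registration order
const ran = []
queueReaction(() => ran.push('reaction 1'))
queueReaction(() => ran.push('reaction 2'))
console.log(ran.length)  // 0 - nothing ran yet, we are still in the task
```

Batching this way means a burst of mutations triggers the reactions only once, and always in the order the mutations happened.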

#Step 2

The code (task) from the developer has finished running. The microtask reactions registered by NX start executing. Since they are microtasks, they run in order. Note that we are still in the same loop tick.

#Step 3

NX runs the hooks passed by the developer using requestAnimationFrame(hook). This may happen in a later loop tick. The important thing is that the hooks run before the next render and after all data, DOM and CSS changes are processed.

#Step 4

The browser renders the next view. This may also happen in a later loop tick, but it never happens before the previous steps in a tick.

Things to keep in mind

We just implemented a simple but effective timing system on top of the native event loop. It works well in theory, but timing is a delicate thing, and slight mistakes can cause some very strange bugs.

In a complex system, it is important to set up some rules about timing and to stick to them later. For NX, I have the following rules.

  1. Never use setTimeout(fn, 0) for internal operations
  2. Register microtasks with the same method
  3. Reserve microtasks for internal operations only
  4. Do not pollute the developer hook execution time window with anything else

#Rule 1 and 2

Reactions to data and DOM manipulation should execute in the order the manipulations happened. It is okay to delay them, as long as their execution order is not mixed up. Mixing execution order makes things unpredictable and difficult to reason about.

setTimeout(fn, 0) is totally unpredictable. Registering microtasks with different methods also leads to mixed-up execution order. For example, microtask2 would incorrectly execute before microtask1 in the example below.


Execution timing: Microtask registration method
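A hypothetical version of such a mix-up (the article's own snippet is shown as an image): setTimeout() schedules a task, not a microtask, so a later-registered real microtask overtakes it.

```javascript
const order = []
const microtask1 = () => order.push('microtask1')
const microtask2 = () => order.push('microtask2')

setTimeout(microtask1, 0)           // registered first, but as a task
Promise.resolve().then(microtask2)  // registered second, as a real microtask

setTimeout(() => {
  console.log(order.join(', '))  // microtask2, microtask1
}, 10)
```

This is exactly why rules 1 and 2 exist: sticking to a single registration method keeps execution order identical to registration order.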

#Rule 3 and 4

Separating the time window of the developer's code execution from that of the internal operations is important. Mixing the two would cause seemingly unpredictable behavior, and it would eventually force developers to learn about the internal workings of the framework. I think many front-end developers have had experiences like this already.


If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this GitHub repository.

I hope you found this a good read, see you next time when I’ll discuss sandboxed code evaluation!

If you have any thoughts on the topic, please share them in the comments.