Bertalan Miklos

I am a JavaScript developer at RisingStack, working on Trace - a Node.js microservice monitoring tool.


Writing a JavaScript Framework - Client-Side Routing

This is the last chapter of the Writing a JavaScript framework series. In this chapter, I am going to discuss how client-side routing in JavaScript differs from server-side routing and why it should be treated differently.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing
  3. Sandboxed code evaluation
  4. Data binding introduction
  5. Data Binding with ES6 Proxies
  6. Custom elements
  7. Client-side routing (current chapter)

Routing on the web

Web pages are either server-side rendered, client-side rendered, or they use a mix of both. Either way, any moderately complex web page has to deal with routing.

For server-rendered pages, routing is handled on the backend. A new page is served when the URL path or the query parameters change, which is perfect for traditional web pages. However, web applications usually keep state about the current user, which would be hard to maintain across a myriad of server-rendered pages.

Client-side frameworks solve these issues by prefetching the app and switching between the stored pages without losing the state. Front-end routing can be implemented very similarly to its server-side counterpart. The only difference is that it fetches the resources straight from the client instead of the server. In this article, I will explain why I think the two should be handled a bit differently, though.

Backend inspired routing

A lot of front-end routing libraries are inspired by the server-side.

They simply run the appropriate route handler on URL changes, which boots and renders the required component. The structure is similar on both ends of the web, the only difference is what the handler functions do.

To demonstrate the similarities, you can find the same routing snippet in the server-side Express framework, the client-side page.js router and React below.

// Express
app.get('/login', sendLoginPage)  
app.get('/app/:user/:account', sendApp)  
// Page.js
page('/login', renderLoginPage)  
page('/app/:user/:account', renderApp)  
<!-- React -->  
  <Route path="/login" component={Login}/>
  <Route path="/app/:user/:account" component={App}/>

React hides the logic behind some JSX, but they all do the same, and they all work perfectly until dynamic parameters are introduced.

In the above examples, a single user may have multiple accounts and the current account can be freely changed. If the account is changed in the App page, the appropriate handler reboots or resends the same App component for the new account - while it might be enough to update some data in the existing component.

This is not a big issue for VDOM based solutions - since they diff the DOM and update the needed parts only - but for traditional frameworks, it can mean a lot of unnecessary work.

Dealing with dynamic parameters

Rerendering the whole page on parameter changes is something I wanted to avoid. To tackle the problem I separated the route from the dynamic parameters first.

In NX, the route determines which component or view is displayed, and it goes into the URL pathname. The dynamic parameters control what data is displayed in the current page, and they are always in the query parameters.

This means that the /app/:user/:account route would transform into /app?user=userId&account=accountId. It is slightly more verbose but it is clearer, and it allowed me to separate client-side routing into page routing and parameter routing. The former navigates in the app shell, while the latter navigates in the data shell.
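The split can be illustrated with a tiny, framework-agnostic helper (toPageAndParams is a hypothetical name, not part of the NX API):

```javascript
// Hypothetical helper: splits a URL of the form
// /app?user=userId&account=accountId into the page route
// (the pathname) and the dynamic parameters (the query).
function toPageAndParams (url) {
  const [pathname, query = ''] = url.split('?')
  const params = {}
  for (const pair of query.split('&')) {
    if (!pair) continue
    const [key, value = ''] = pair.split('=')
    params[decodeURIComponent(key)] = decodeURIComponent(value)
  }
  return { page: pathname, params }
}

const route = toPageAndParams('/app?user=userId&account=accountId')
// route.page is '/app'
// route.params is { user: 'userId', account: 'accountId' }
```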

The app shell

You might be familiar with the app shell model, which was popularized by Google together with Progressive Web Apps.

The app shell is the minimal HTML, CSS and JavaScript required to power the user interface.

In NX, the path routing is responsible for navigating in the app shell. A simple routing structure looks like this.

  <h2 route="login">Login page</h2>
  <h2 route="app">The app</h2>

It is similar to the previous examples - especially the React one - but there is one major difference. It doesn't deal with the user and account parameters. Instead, it simply navigates in the empty app shell.

This makes it a dead simple tree walking problem. The router tree is walked - based on the URL pathname - and it displays the components it finds in its way.
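To sketch the idea in plain JavaScript (this is illustrative code, not the actual NX implementation), walking a router tree boils down to consuming one URL token per routing level:

```javascript
// A toy router tree: each key is a route name,
// each value is the map of its child routes.
const routerTree = {
  home: {},
  settings: {
    profile: {},
    privacy: {}
  }
}

// walk the tree token by token and collect the views on the way
function walk (tree, pathname) {
  const views = []
  let node = tree
  for (const token of pathname.split('/').filter(Boolean)) {
    if (!(token in node)) break
    views.push(token)
    node = node[token]
  }
  return views
}

const views = walk(routerTree, '/settings/profile')
// views is ['settings', 'profile']
```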

Client-Side Routing with JavaScript - Path Routing Diagram

The above diagram explains how the current view is determined for the /settings/profile URL. You can find the accompanying code below.

<a iref="home">Home</a>  
<a iref="settings">Settings</a>  
  <h2 route="home" default-route>Home page</h2>
  <div route="settings">
    <h2>Settings page</h2>
    <a iref="./profile">Profile</a>
    <a iref="./privacy">Privacy</a>
      <h3 route="profile" default-route>Profile settings</h3>
      <h3 route="privacy">Privacy settings</h3>
  </div>

This example demonstrates a nested router structure with default and relative routes. As you can see, it is simple enough to be configured in HTML only, and it works similarly to most file systems. You can navigate inside it with absolute (home) and relative (./privacy) links. You can see the routing snippet in action below.

Client-Side Routing - Path Routing Example

This simple structure can be abused to create powerful patterns. One example is parallel routing, where multiple router trees are walked at the same time. The side menu and the content in the NX docs page work this way. It has two parallel nested routers, which change the side navigation's content and the page's content simultaneously.

The data shell

Unlike the app shell, the 'data shell' is not a hyped term. In fact, it is used only by me, and it refers to the pool of dynamic parameters, which drives the data flow. Rather than changing the current page, it only changes the data inside the page. Changing the current page usually changes the parameter pool, but changing a parameter in the pool does not cause a page reboot.

Typically the data shell is formed by a set of primitive values and - together with the current page - it represents the state of the application. As such, it can be used to save, load or share the state. In order to do this, it must be reflected in the URL, the local storage or the browser history - which makes it inherently global.
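As an illustration of how such a parameter pool could be reflected in the URL (serializeParams and the config shape are simplified stand-ins, not the NX API):

```javascript
// Hypothetical sketch: build the query string from the parameter
// pool, keeping only the parameters configured as URL parameters.
function serializeParams (params, config) {
  const urlParams = []
  for (const [name, value] of Object.entries(params)) {
    if (config[name] && config[name].url) {
      urlParams.push(`${encodeURIComponent(name)}=${encodeURIComponent(value)}`)
    }
  }
  return urlParams.length ? '?' + urlParams.join('&') : ''
}

const query = serializeParams(
  { name: 'World', theme: 'dark' },
  { name: { url: true }, theme: { durable: true } }
)
// query is '?name=World'; 'theme' stays out of the URL
```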

The NX control component - among many others - can hook into the parameter pool with a declarative config, which determines how the parameters should interact with the component's state, the URL, the history and the web storage.

  template: require('./view.html'),
  params: {
    name: { history: true, url: true, default: 'World' }
  }

<!-- view.html -->
<p>Name: <input type="text" name="name" bind/></p>  
<p>Hello @{name}</p>  

The above example creates a component, which keeps its name property in sync with the URL and the browser history. You can see it in action below.

Parameter routing example

Thanks to the ES6 Proxy based transparent reactivity, the synchronization is seamless. You can write vanilla JavaScript, and things will two-way synchronize in the background when needed. The below diagram gives a high-level overview of this.

Parameter routing diagram

The simple, declarative syntax encourages developers to spend a few minutes designing the web integration of the page before coding. Not all parameters should go into the URL or add a new history item on change. There are plenty of different use cases, and each should be configured appropriately.

  • A simple text filter should be a URL parameter, as it should be shareable with other users.

  • An account id should be a URL and history parameter, as the current account should be shareable, and changing it is drastic enough to add a new history item.

  • A visual preference should be a durable parameter (saved in local storage), as it should be persisted for each user and shouldn't be shared.

These are just some of the possible settings. With minimal effort, you can get the parameters to fit your use case perfectly.

Putting it together

Path routing and parameter routing are independent of each other, but they are designed to work nicely together. Path routing navigates to the desired page in the app shell, then parameter routing takes over and manages the state and the data shell.

The parameter pool may differ between pages, so there is an explicit API for changing the current page and parameters in both JavaScript and HTML.

<a iref="newPage" $iref-params="{ newParam: 'value' }"></a>  

{
  to: 'newPage',
  params: { newParam: 'value' }
}

On top of this, NX automatically adds an active CSS class to active links, and you can configure all of the common routing features - like parameter inheritance and router events - with the options config.

Check the routing docs for more about these features.

A Client-Side Routing Example

The below example demonstrates parameter routing combined with a reactive data flow. It is a fully working NX app. Just copy the code into an empty HTML file and open it in a modern browser to try it out.

<script src=""></script>

  params: {
    title: { history: true, url: true, default: 'Gladiator' }
  }

function setup (comp, state) {  
  comp.$observe(() => {
    fetch('' + state.title)
      .then(response => response.json())
      .then(data => state.plot = data.Plot || 'No plot found')
  })
}

  <h2>Movie plotter</h2>
  <p>Title: <input type="text" name="title" bind /></p>
  <p>Plot: @{plot}</p>

The state's title property is automatically kept in sync with the URL and the browser history. The function passed to comp.$observe is observed, and it automatically fetches the appropriate movie plot whenever the title changes. This creates a powerful reactive data flow that integrates perfectly with the browser.

Example app with dynamic parameters

This app doesn't demonstrate path routing. For some more complete examples, please check the intro app, the NX Hacker News clone, or the path routing and parameter routing docs pages. The docs pages have editable examples.


If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this GitHub organization - split between many repos.

The Writing a JavaScript Framework series is complete with this article, thanks for reading! If you have any thoughts on the topic, please share them in the comments.

Writing a JavaScript Framework - The Benefits of Custom Elements

This is the sixth chapter of the Writing a JavaScript framework series. In this chapter, I am going to discuss the usefulness of Custom Elements and their possible role in a modern front-end framework's core.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing
  3. Sandboxed code evaluation
  4. Data binding introduction
  5. Data Binding with ES6 Proxies
  6. Custom elements (current chapter)
  7. Client-side routing

The era of components

Components took over the web in recent years. All of the modern front-end frameworks - like React, Vue or Polymer - utilize component-based modularization. They provide distinct APIs and work differently under the hood, but they all share the following features with many of the other recent frameworks.

  • They have an API for defining components and registering them by name or with a selector.

  • They provide lifecycle hooks, which can be used to set up the component's logic and to synchronize the view with the state.

These features were missing a simple native API until recently, but this changed with the finalization of the Custom Elements spec. Custom Elements can cover the above features, but they are not always a perfect fit. Let's see why!

Custom Elements

Custom Elements are part of the Web Components standard, which started as an idea in 2011 and resulted in two different specs before stabilizing recently. The final version feels like a simple native alternative to component based frameworks instead of a tool for framework authors. It provides a nice high-level API for defining components, but it lacks new non polyfillable features.

If you are not yet familiar with Custom Elements please take a look at this article before going on.

The Custom Elements API

The Custom Elements API is based on ES6 classes. Elements can inherit from native HTML elements or other Custom Elements, and they can be extended with new properties and methods. They can also override a set of methods - defined in the spec - which hook into their lifecycle.

class MyElement extends HTMLElement {  
  // these are standard hooks, called on certain events
  constructor() { ... }
  connectedCallback () { ... }
  disconnectedCallback () { ... }
  adoptedCallback () { ... }
  attributeChangedCallback (attrName, oldVal, newVal) { ... }

  // these are custom methods and properties
  get myProp () { ... }
  set myProp (value) { ... }
  myMethod () { ... }
}

// this registers the Custom Element
customElements.define('my-element', MyElement)  

After being defined, the elements can be instantiated by name - for example as <my-element></my-element> in HTML, or with document.createElement('my-element') in JavaScript.


The class-based API is very clean, but in my opinion, it lacks flexibility. As a framework author, I preferred the deprecated v0 API - which was based on old school prototypes.

const MyElementProto = Object.create(HTMLElement.prototype)

// native hooks
MyElementProto.attachedCallback = ...  
MyElementProto.detachedCallback = ...

// custom properties and methods
MyElementProto.myMethod = ...

document.registerElement('my-element', { prototype: MyElementProto })  

It is arguably less elegant, but it can integrate nicely with both ES6 and pre ES6 code. On the other hand, using some pre ES6 features together with classes can get pretty complex.

As an example, I need the ability to control which HTML interface the component inherits from. ES6 classes use the extends keyword for inheritance, and they require the developer to type in MyClass extends ChosenHTMLInterface.

It is far from ideal for my use case since NX is based on middleware functions rather than classes. In NX, the interface can be set with the element config property, which accepts a valid HTML element's name - like button.

nx.component({ element: 'button' })  

To achieve this, I had to imitate ES6 classes with the prototype based system. Long story short, it is more painful than one might think and it requires the non polyfillable ES6 Reflect.construct and the performance killer Object.setPrototypeOf functions.

  function MyElement () {
    return Reflect.construct(HTMLElement, [], MyElement)
  }
  const myProto = MyElement.prototype
  Object.setPrototypeOf(myProto, HTMLElement.prototype)
  Object.setPrototypeOf(MyElement, HTMLElement)
  myProto.connectedCallback = ...
  myProto.disconnectedCallback = ...
  customElements.define('my-element', MyElement)

This is just one of the occasions when I found working with ES6 classes clumsy. I think they are nice for everyday usage, but when I need the full power of the language, I prefer to use prototypal inheritance.

Lifecycle hooks

Custom Elements have five lifecycle hooks that are invoked synchronously on certain events.

  • constructor is called on the element's instantiation.
  • connectedCallback is called when the element is attached to the DOM.
  • disconnectedCallback is called when the element is detached from the DOM.
  • adoptedCallback is called when the element is adopted into a new document, for example with document.adoptNode.
  • attributeChangedCallback is called when a watched attribute of the element changes.

constructor and connectedCallback are ideal for setting up the component's state and logic, while attributeChangedCallback can be used to reflect the component's properties with HTML attributes and vice versa. disconnectedCallback is useful for cleaning up after the component instance.

When combined, these can cover a nice set of functionalities, but I still miss a beforeDisconnected and childrenChanged callback. A beforeDisconnected hook would be useful for non-hackish leave animations, but there is no way to implement it without wrapping or heavily patching the DOM.

The childrenChanged hook is essential for creating a bridge between the state and the view. Take a look at the following example.

  .use((elem, state) => = 'World')
  <p>Hello: ${name}!</p>

It is a simple templating snippet, which interpolates the name property from the state into the view. In case the user decides to replace the p element with something else, the framework has to be notified about the change. It has to clean up after the old p element and apply the interpolation to the new content. childrenChanged might not be exposed as a developer hook, but knowing when a component's content mutates is a must for frameworks.

As I mentioned, Custom Elements lack a childrenChanged callback, but it can be implemented with the older MutationObserver API. MutationObservers also provide alternatives to the connectedCallback, disconnectedCallback and attributeChangedCallback hooks for older browsers.

// create an observer instance
const observer = new MutationObserver(onMutations)

function onMutations (mutations) {  
  for (let mutation of mutations) {
    // handle mutation.addedNodes, mutation.removedNodes,
    // mutation.attributeName and mutation.oldValue here
  }
}

// listen for attribute and child mutations on `MyComponentInstance` and all of its descendants
observer.observe(MyComponentInstance, {  
  attributes: true,
  childList: true,
  subtree: true
})

This might raise some questions about the necessity of Custom Elements, apart from their simple API.

In the next sections, I will cover some key differences between MutationObservers and Custom Elements and explain when to use which.

Custom Elements vs MutationObservers

Custom Element callbacks are invoked synchronously on DOM mutations, while MutationObservers gather mutations and invoke the callbacks asynchronously for a batch of them. This is not a big issue for setup logic, but it can cause some unexpected bugs during cleaning up. Having a small interval when the disposed data is still hanging around is dangerous.

Another important difference is that MutationObservers do not pierce the shadow DOM boundary. Listening for mutations inside a shadow DOM requires Custom Elements or manually adding a MutationObserver to the shadow root. If you have never heard about the shadow DOM, you can learn more about it here.

Finally, they offer slightly different sets of hooks. Custom Elements have the adoptedCallback hook, while MutationObservers can listen for text changes and child mutations at any depth.

Considering all of these, combining the two to get the best of both worlds is a good idea.

Combining Custom Elements with MutationObservers

Since Custom Elements are not yet widely supported, MutationObservers must be used for detecting DOM mutations. There are two options for using them.

  • Building an API on top of Custom Elements and using MutationObservers for polyfilling them.

  • Building an API with MutationObservers and using Custom Elements to add some improvements when they are available.

I chose the latter option, as MutationObservers are required to detect child mutations even in browsers with full Custom Elements support.

The system that I will use for the next version of NX simply adds a MutationObserver to the document in older browsers. However, in modern browsers, it uses Custom Elements to set up hooks for the topmost components and adds a MutationObserver to them inside the connectedCallback hook. This MutationObserver then takes the role of detecting further mutations inside the component.

It looks for changes only inside the part of the document which is controlled by the framework. The responsible code looks roughly like this.

function registerRoot (name) {  
  if ('customElements' in window) {
    registerRootV1(name)
  } else if ('registerElement' in document) {
    registerRootV0(name)
  } else {
    // add a MutationObserver to the document
  }
}

function registerRootV1 (name) {  
  function RootElement () {
    return Reflect.construct(HTMLElement, [], RootElement)
  }
  const proto = RootElement.prototype
  Object.setPrototypeOf(proto, HTMLElement.prototype)
  Object.setPrototypeOf(RootElement, HTMLElement)
  proto.connectedCallback = connectedCallback
  proto.disconnectedCallback = disconnectedCallback
  customElements.define(name, RootElement)
}

function registerRootV0 (name) {  
  const proto = Object.create(HTMLElement.prototype)
  proto.attachedCallback = connectedCallback
  proto.detachedCallback = disconnectedCallback
  document.registerElement(name, { prototype: proto })
}

function connectedCallback (elem) {  
  // add a MutationObserver to the root element
}

function disconnectedCallback (elem) {  
  // remove the MutationObserver from the root element
}

This provides a performance benefit for modern browsers, as they only have to deal with a minimal set of DOM mutations.


All in all, it would be easy to refactor NX to work without Custom Elements and with no big performance impact, but they still add a nice boost for certain use cases. What I would need to make them really useful, though, is a flexible low-level API and a greater variety of synchronous lifecycle hooks.

If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX core's source code in this GitHub repo.

I hope you found this a good read, see you next time when I’ll discuss client-side routing!

If you have any thoughts on the topic, please share them in the comments.

Writing a JavaScript Framework - Data Binding with ES6 Proxies

This is the fifth chapter of the Writing a JavaScript framework series. In this chapter, I am going to explain how to create a simple, yet powerful data binding library with the new ES6 Proxies.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing
  3. Sandboxed code evaluation
  4. Data binding introduction
  5. Data Binding with ES6 Proxies (current chapter)
  6. Custom elements
  7. Client-side routing


ES6 made JavaScript a lot more elegant, but the bulk of new features are just syntactic sugar. Proxies are one of the few non polyfillable additions. If you are not familiar with them, please take a quick look at the MDN Proxy docs before going on.


Having a basic knowledge of the ES6 Reflection API and Set, Map and WeakMap objects will also be helpful.

The nx-observe library

nx-observe is a data binding solution in under 140 lines of code. It exposes the observable(obj) and observe(fn) functions, which are used to create observable objects and observer functions. An observer function automatically executes when an observable property used by it changes. The example below demonstrates this.

// this is an observable object
const person = observable({name: 'John', age: 20})

function print () {  
  console.log(`${}, ${person.age}`)
}

// this creates an observer function
// outputs 'John, 20' to the console
observe(print)

// outputs 'Dave, 20' to the console
setTimeout(() => = 'Dave', 100)

// outputs 'Dave, 22' to the console
setTimeout(() => person.age = 22, 200)  

The print function passed to observe() reruns every time or person.age changes. print is called an observer function.

If you are interested in a few more examples, please check the GitHub readme or the NX home page for a more lifelike scenario.

Implementing a simple observable

In this section, I am going to explain what happens under the hood of nx-observe. First, I will show you how changes to an observable's properties are detected and paired with observers. Then I will explain a way to run the observer functions triggered by these changes.

Registering changes

Changes are registered by wrapping observable objects into ES6 Proxies. These proxies seamlessly intercept get and set operations with the help of the Reflection API.

The variables currentObserver and queueObserver() are used in the code below, but will only be explained in the next section. For now, it is enough to know that currentObserver always points to the currently executing observer function, and queueObserver() is a function that queues an observer to be executed soon.

/* maps observable properties to a Set of
observer functions, which use the property */  
const observers = new WeakMap()

/* points to the currently running 
observer function, can be undefined */  
let currentObserver

/* transforms an object into an observable 
by wrapping it into a proxy, it also adds a blank  
Map for property-observer pairs to be saved later */  
function observable (obj) {  
  observers.set(obj, new Map())
  return new Proxy(obj, {get, set})
}

/* this trap intercepts get operations,
it does nothing if no observer is executing  
at the moment */  
function get (target, key, receiver) {  
  const result = Reflect.get(target, key, receiver)
  if (currentObserver) {
    registerObserver(target, key, currentObserver)
  }
  return result
}

/* if an observer function is running currently,
this function pairs the observer function  
with the currently fetched observable property  
and saves them into the observers Map */  
function registerObserver (target, key, observer) {  
  let observersForKey = observers.get(target).get(key)
  if (!observersForKey) {
    observersForKey = new Set()
    observers.get(target).set(key, observersForKey)
  }
  observersForKey.add(observer)
}

/* this trap intercepts set operations,
it queues every observer associated with the  
currently set property to be executed later */  
function set (target, key, value, receiver) {  
  const observersForKey = observers.get(target).get(key)
  if (observersForKey) {
    observersForKey.forEach(queueObserver)
  }
  return Reflect.set(target, key, value, receiver)
}

The get trap does nothing if currentObserver is not set. Otherwise, it pairs the fetched observable property and the currently running observer and saves them into the observers WeakMap. Observers are saved into a Set per observable property. This ensures that there are no duplicates.

The set trap retrieves all the observers paired with the modified observable property and queues them for later execution.

You can find a figure and a step-by-step description explaining the nx-observe example code below.

JavaScript data binding with es6 proxy - observable code sample

  1. The person observable object is created.
  2. currentObserver is set to print.
  3. print starts executing.
  4. is retrieved inside print.
  5. The proxy get trap on person is invoked.
  6. The observer Set belonging to the (person, name) pair is retrieved by observers.get(person).get('name').
  7. currentObserver (print) is added to the observer Set.
  8. Step 4-7 are executed again with person.age.
  9. ${}, ${person.age} is printed to the console.
  10. print finishes executing.
  11. currentObserver is set to undefined.
  12. Some other code starts running.
  13. person.age is set to a new value (22).
  14. The proxy set trap on person is invoked.
  15. The observer Set belonging to the (person, age) pair is retrieved by observers.get(person).get('age').
  16. Observers in the observer Set (including print) are queued for execution.
  17. print executes again.

Running the observers

Queued observers run asynchronously in one batch, which results in superior performance. During registration, the observers are synchronously added to the queuedObservers Set. A Set cannot contain duplicates, so enqueuing the same observer multiple times won't result in multiple executions. If the Set was empty before, a new task is scheduled to iterate and execute all the queued observers after some time.

/* contains the triggered observer functions,
which should run soon */  
const queuedObservers = new Set()

/* points to the currently running observer,
it can be undefined */  
let currentObserver

/* the exposed observe function */
function observe (fn) {  
  queueObserver(fn)
}

/* adds the observer to the queue and 
ensures that the queue will be executed soon */  
function queueObserver (observer) {  
  if (queuedObservers.size === 0) {
    // schedule a task to run the queued observers soon
    Promise.resolve().then(runObservers)
  }
  queuedObservers.add(observer)
}

/* runs the queued observers,
currentObserver is set to undefined in the end */  
function runObservers () {  
  try {
    queuedObservers.forEach(runObserver)
  } finally {
    queuedObservers.clear()
    currentObserver = undefined
  }
}

/* sets the global currentObserver to observer, 
then executes it */  
function runObserver (observer) {  
  currentObserver = observer
  observer()
}

The code above ensures that whenever an observer is executing, the global currentObserver variable points to it. Setting currentObserver 'switches' the get traps on, to listen and pair currentObserver with all the observable properties it uses while executing.

Building a dynamic observable tree

So far our model works nicely with single level data structures but requires us to wrap every new object-valued property in an observable by hand. For example, the code below would not work as expected.

const person = observable({data: {name: 'John'}})

function print () {  
  console.log(
}

// outputs 'John' to the console
observe(print)

// does nothing
setTimeout(() => = 'Dave', 100)  

In order to make this code work, we would have to replace observable({data: {name: 'John'}}) with observable({data: observable({name: 'John'})}). Fortunately we can eliminate this inconvenience by modifying the get trap a little bit.

function get (target, key, receiver) {  
  const result = Reflect.get(target, key, receiver)
  if (currentObserver) {
    registerObserver(target, key, currentObserver)
    if (typeof result === 'object') {
      const observableResult = observable(result)
      Reflect.set(target, key, observableResult, receiver)
      return observableResult
    }
  }
  return result
}

The get trap above wraps the returned value into an observable proxy before returning it - in case it is an object. This is perfect from a performance point of view too, since observables are only created when they are really needed by an observer.

Comparison with an ES5 technique

A very similar data binding technique can be implemented with ES5 property accessors (getter/setter) instead of ES6 Proxies. Many popular libraries use this technique, for example MobX and Vue. Using proxies over accessors has two main advantages and a major disadvantage.

Expando properties

Expando properties are dynamically added properties in JavaScript. The ES5 technique does not support expando properties since accessors have to be predefined per property to be able to intercept operations. This is a technical reason why central stores with a predefined set of keys are trending nowadays.

On the other hand, the Proxy technique does support expando properties, since proxies are defined per object and they intercept operations for every property of the object.

A typical example where expando properties are crucial is when using arrays. JavaScript arrays are pretty much useless without the ability to add or remove items from them. ES5 data binding techniques usually hack around this problem by providing custom or overwritten Array methods.
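A minimal sketch shows why proxies handle this case naturally: a single Proxy traps writes even to indices that did not exist when the array was wrapped (observableArray and onChange are illustrative names, not library APIs).

```javascript
// Wrap an array in a Proxy; the set trap fires for every write,
// including indices added later and the implicit length update.
function observableArray (arr, onChange) {
  return new Proxy(arr, {
    set (target, key, value, receiver) {
      onChange(key, value)
      return Reflect.set(target, key, value, receiver)
    }
  })
}

const changedKeys = []
const list = observableArray([], key => changedKeys.push(key))

list.push(1)
// push assigns index '0' and then sets 'length',
// so both writes go through the trap
```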

Getters and setters

Libraries using the ES5 method provide 'computed' bound properties with some special syntax. These properties have their native equivalents, namely getters and setters. However, the ES5 method uses getters/setters internally to set up the data binding logic, so it cannot work with user-defined property accessors.

Proxies intercept every kind of property access and mutation, including getters and setters, so this does not pose a problem for the ES6 method.
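A quick standalone demonstration: because a native getter runs with the proxy as its receiver, the property reads made inside the getter are intercepted too.

```javascript
// Accessing 'full' also triggers the get trap for the 'first' and
// 'last' reads performed inside the native getter.
const accessed = []

const person = new Proxy({
  first: 'Ada',
  last: 'Lovelace',
  get full () { return `${this.first} ${this.last}` }
}, {
  get (target, key, receiver) {
    accessed.push(key)
    return Reflect.get(target, key, receiver)
  }
})

console.log(person.full) // 'Ada Lovelace'
console.log(accessed)    // ['full', 'first', 'last']
```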

The disadvantage

The big disadvantage of using Proxies is browser support. They are only supported in the most recent browsers, and the best parts of the Proxy API are non-polyfillable.

A few notes

The data binding method introduced here is a working one, but I made some simplifications to make it digestible. You can find a few notes below about the topics I left out because of this simplification.

Cleaning up

Memory leaks are nasty. The code introduced here avoids them in a sense, as it uses a WeakMap to save the observers. This means that the observers associated with an observable are garbage collected together with the observable.

However, a possible use case could be a central, durable store with a frequently shifting DOM around it. In this case, DOM nodes should release all of their registered observers before they are garbage collected. This functionality is left out of the example, but you can check how the unobserve() function is implemented in the nx-observe code.

Double wrapping with Proxies

Proxies are transparent, meaning there is no native way of determining if something is a Proxy or a plain object. Moreover, they can be nested infinitely, so without the necessary precautions, we might end up wrapping an observable again and again.

There are many clever ways to make a Proxy distinguishable from normal objects, but I left it out of the example. One way would be to add a Proxy to a WeakSet named proxies and check for inclusion later. If you are interested in how nx-observe implements the isObservable() method, please check the code.
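The WeakSet approach can be sketched like this (hypothetical simplified helpers, not nx-observe's exact code):

```javascript
// Every created proxy is registered in a WeakSet, so we can both test
// for observability and avoid wrapping an observable a second time.
const proxies = new WeakSet()

function observable (obj) {
  if (proxies.has(obj)) return obj   // already an observable: don't re-wrap
  const proxy = new Proxy(obj, {})   // real traps omitted for brevity
  proxies.add(proxy)
  return proxy
}

function isObservable (obj) {
  return proxies.has(obj)
}

const obs = observable({})
console.log(isObservable(obs))       // true
console.log(observable(obs) === obs) // true: no double wrapping
```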


Inheritance

nx-observe also works with prototypal inheritance. The example below demonstrates what this means exactly.

const parent = observable({greeting: 'Hello'})
const child = observable({subject: 'World!'})
Object.setPrototypeOf(child, parent)

function print () {
  console.log(`${child.greeting} ${child.subject}`)
}

// outputs 'Hello World!' to the console
observe(print)

// outputs 'Hello There!' to the console
setTimeout(() => child.subject = 'There!')

// outputs 'Hey There!' to the console
setTimeout(() => parent.greeting = 'Hey', 100)

// outputs 'Look There!' to the console
setTimeout(() => child.greeting = 'Look', 200)

The get operation is invoked for every member of the prototype chain until the property is found, so the observers are registered everywhere they could be needed.

There are some edge cases caused by the little-known fact that set operations also walk the prototype chain (quite sneakily), but these won't be covered here.

Internal properties

Proxies also intercept 'internal property access'. Your code probably uses many internal properties that you usually don't even think about. Some keys for such properties are the well-known Symbols for example. Properties like these are usually correctly intercepted by Proxies, but there are a few buggy cases.

Asynchronous nature

The observers could be run synchronously when the set operation is intercepted. This would provide several advantages like less complexity, predictable timing and nicer stack traces, but it would also cause a big mess for certain scenarios.

Imagine pushing 1000 items to an observable array in a single loop. The array length would change 1000 times, and the observers associated with it would also execute 1000 times in quick succession. This means running the exact same set of functions 1000 times, which is rarely useful.

Another problematic scenario would be two-way observations. The below code would start an infinite cycle if observers ran synchronously.

const observable1 = observable({prop: 'value1'})  
const observable2 = observable({prop: 'value2'})

observe(() => observable1.prop = observable2.prop)  
observe(() => observable2.prop = observable1.prop)  

For these reasons nx-observe queues observers without duplicates and executes them in one batch as a microtask to avoid FOUC. If you are unfamiliar with the concept of a microtask, please check my previous article about timing in the browser.
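The queueing logic can be sketched like this (hypothetical simplified names, not nx-observe's exact implementation):

```javascript
// Queued observers are deduplicated in a Set and executed once, in a
// microtask, after the current task finishes.
const queuedObservers = new Set()
let runs = 0

function queueObserver (observer) {
  if (queuedObservers.size === 0) {
    Promise.resolve().then(runObservers) // schedule one batch per task
  }
  queuedObservers.add(observer)          // duplicates are ignored
}

function runObservers () {
  queuedObservers.forEach(observer => observer())
  queuedObservers.clear()
}

const observer = () => runs++

for (let i = 0; i < 1000; i++) {
  queueObserver(observer) // 1000 intercepted mutations...
}

setTimeout(() => console.log(runs)) // ...but the observer runs only once
```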

Data binding with ES6 Proxies - the Conclusion

If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this Github repository and the nx-observe source code in this Github repository.

I hope you found this a good read, see you next time when I’ll discuss custom HTML Elements!

If you have any thoughts on the topic, please share them in the comments.

Writing a JavaScript Framework - Introduction to Data Binding, beyond Dirty Checking

This is the fourth chapter of the Writing a JavaScript framework series. In this chapter, I am going to explain the dirty checking and the accessor data binding techniques and point out their strengths and weaknesses.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing
  3. Sandboxed code evaluation
  4. Data binding introduction (current chapter)
  5. Data Binding with ES6 Proxies
  6. Custom elements
  7. Client-side routing

An introduction to data binding

Data binding is a general technique that binds data sources from the provider and consumer together and synchronizes them.

This is a general definition, which outlines the common building blocks of data binding techniques.

  • A syntax to define the provider and the consumer.
  • A syntax to define which changes should trigger synchronization.
  • A way to listen to these changes on the provider.
  • A synchronizing function that runs when these changes happen. I will call this function the handler() from now on.

The above steps are implemented in different ways by the different data binding techniques. The upcoming sections will be about two such techniques, namely dirty checking and the accessor method. Both have their strengths and weaknesses, which I will briefly discuss after introducing them.

Dirty checking

Dirty checking is probably the most well-known data binding method. It is simple in concept, and it doesn't require complex language features, which makes it a nice candidate for legacy usage.

The syntax

Defining the provider and the consumer doesn't require any special syntax, just plain JavaScript objects.

const provider = {
  message: 'Hello World'
}
const consumer = document.createElement('p')

Synchronization is usually triggered by property mutations on the provider. Properties that should be observed for changes must be explicitly mapped with their handler().

observe(provider, 'message', message => {
  consumer.innerHTML = message
})

The observe() function simply saves the (provider, property) -> handler mapping for later use.

function observe (provider, prop, handler) {
  provider._handlers[prop] = handler
}

With this, we have a syntax for defining the provider and the consumer and a way to register handler() functions for property changes. The public API of our library is ready, now comes the internal implementation.

Listening on changes

Dirty checking is called dirty for a reason. It runs periodical checks instead of listening on property changes directly. Let's call this check a digest cycle from now on. A digest cycle iterates through every (provider, property) -> handler entry added by observe() and checks if the property value changed since the last iteration. If it did change, it runs the handler() function. A simple implementation would look like below.

function digest () {
  providers.forEach(digestProvider)
}

function digestProvider (provider) {
  for (let prop in provider._handlers) {
    if (provider._prevValues[prop] !== provider[prop]) {
      provider._prevValues[prop] = provider[prop]
      provider._handlers[prop](provider[prop])
    }
  }
}

The digest() function needs to be run from time to time to ensure a synchronized state.
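Wiring the pieces together gives a runnable sketch (the global providers array and the bookkeeping initialization in observe() are assumptions made for this example):

```javascript
// Dirty checking end to end: observe() registers handlers, digest()
// compares current values against the previously seen ones.
const providers = []

function observe (provider, prop, handler) {
  provider._handlers = provider._handlers || {}
  provider._prevValues = provider._prevValues || {}
  if (!providers.includes(provider)) providers.push(provider)
  provider._handlers[prop] = handler
}

function digest () {
  providers.forEach(digestProvider)
}

function digestProvider (provider) {
  for (let prop in provider._handlers) {
    if (provider._prevValues[prop] !== provider[prop]) {
      provider._prevValues[prop] = provider[prop]
      provider._handlers[prop](provider[prop])
    }
  }
}

const provider = { message: 'Hello World' }
let rendered = ''
observe(provider, 'message', message => { rendered = message })

digest()                      // first check: 'message' is dirty
console.log(rendered)         // 'Hello World'

provider.message = 'Hello NX'
digest()                      // the periodic check picks up the mutation
console.log(rendered)         // 'Hello NX'
```

In a real setup digest() would be triggered on a timer or after event handlers, which is exactly the source of dirty checking's performance problems.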

The accessor technique

The accessor technique is the now trending one. It is a bit less widely supported as it requires the ES5 getter/setter functionality, but it makes up for this in elegance.

The syntax

Defining the provider requires special syntax. The plain provider object has to be passed to the observable() function, which transforms it into an observable object.

const provider = observable({
  greeting: 'Hello',
  subject: 'World'
})
const consumer = document.createElement('p')

This small inconvenience is more than compensated by the simple handler() mapping syntax. With dirty checking, we would have to define every observed property explicitly like below.

observe(provider, 'greeting', greeting => {
  consumer.innerHTML = greeting + ' ' + provider.subject
})

observe(provider, 'subject', subject => {
  consumer.innerHTML = provider.greeting + ' ' + subject
})

This is verbose and clumsy. The accessor technique can automatically detect the used provider properties inside the handler() function, which allows us to simplify the above code.

observe(() => {
  consumer.innerHTML = provider.greeting + ' ' + provider.subject
})

The implementation of observe() is different from the dirty checking one. It just executes the passed handler() function and flags it as the currently active one while it is running.

let activeHandler

function observe (handler) {
  activeHandler = handler
  handler()
  activeHandler = undefined
}

Note that we exploit the single-threaded nature of JavaScript here by using the single activeHandler variable to keep track of the currently running handler() function.

Listening on changes

This is where the 'accessor technique' name comes from. The provider is augmented with getters/setters, which do the heavy lifting in the background. The idea is to intercept the get/set operations of the provider properties in the following way.

  • get: If there is an activeHandler running, save the (provider, property) -> activeHandler mapping for later use.
  • set: Run all handler() functions, which are mapped with the (provider, property) pair.

The accessor data binding technique.

The following code demonstrates a simple implementation of this for a single provider property.

function observableProp (provider, prop) {
  let value = provider[prop]
  Object.defineProperty(provider, prop, {
    get () {
      if (activeHandler) {
        provider._handlers[prop] = activeHandler
      }
      return value
    },
    set (newValue) {
      value = newValue
      const handler = provider._handlers[prop]
      if (handler) {
        activeHandler = handler
        handler()
        activeHandler = undefined
      }
    }
  })
}

The observable() function mentioned in the previous section walks the provider properties recursively and converts all of them into observables with the above observableProp() function.

function observable (provider) {
  for (let prop in provider) {
    observableProp(provider, prop)
    if (typeof provider[prop] === 'object') {
      observable(provider[prop])
    }
  }
  return provider
}
This is a very simple implementation, but it is enough for a comparison between the two techniques.
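For completeness, here is the whole accessor sketch wired together and run once (simplified: no nested objects, and the _handlers bookkeeping is initialized inside observable(), which is an assumption made for this example):

```javascript
let activeHandler

function observe (handler) {
  activeHandler = handler
  handler()                  // run once to register the used properties
  activeHandler = undefined
}

function observableProp (provider, prop) {
  let value = provider[prop]
  Object.defineProperty(provider, prop, {
    get () {
      if (activeHandler) {
        provider._handlers[prop] = activeHandler // register the running handler
      }
      return value
    },
    set (newValue) {
      value = newValue
      const handler = provider._handlers[prop]
      if (handler) {
        activeHandler = handler
        handler()
        activeHandler = undefined
      }
    }
  })
}

function observable (provider) {
  provider._handlers = {}
  for (let prop of Object.keys(provider)) {
    if (prop !== '_handlers') observableProp(provider, prop)
  }
  return provider
}

const provider = observable({ greeting: 'Hello', subject: 'World' })
let output = ''
observe(() => { output = provider.greeting + ' ' + provider.subject })

console.log(output)       // 'Hello World'
provider.subject = 'NX'
console.log(output)       // 'Hello NX'
```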

Comparison of the techniques

In this section, I will briefly outline the strengths and weaknesses of dirty checking and the accessor technique.


Syntax

Dirty checking requires no syntax to define the provider and consumer, but mapping the (provider, property) pair with the handler() is clumsy and not flexible.

The accessor technique requires the provider to be wrapped by the observable() function, but the automatic handler() mapping makes up for this. For large projects with data binding, it is a must-have feature.


Performance

Dirty checking is notorious for its bad performance. It has to check every (provider, property) -> handler entry, possibly multiple times, during every digest cycle. Moreover, it has to grind even when the app is idle, since it can't know when the property changes happen.

The accessor method is faster, but performance could be unnecessarily degraded in case of big observable objects. Replacing every property of the provider with accessors is usually overkill. A solution would be to build the getter/setter tree dynamically when needed, instead of doing it ahead of time in one batch. Alternatively, a simpler solution is wrapping the unneeded properties with a noObserve() function, which tells observable() to leave that part untouched. This sadly introduces some extra syntax.


Flexibility

Dirty checking naturally works with both expando (dynamically added) and accessor properties.

The accessor technique has a weak spot here. Expando properties are not supported, because they are left out of the initial getter/setter tree. This causes issues with arrays for example, but it can be fixed by manually running observableProp() after adding a new property. Getter/setter properties are not supported either, since accessors can't be wrapped by accessors again. A common workaround for this is using a computed() function instead of a getter. This introduces even more custom syntax.

Timing alternatives

Dirty checking doesn't give us much freedom here since we have no way of knowing when the actual property changes happen. The handler() functions can only be executed asynchronously, by running the digest() cycle from time to time.

Getters/setters added by the accessor technique are triggered synchronously, so we have a freedom of choice. We may decide to run the handler() right away, or save it in a batch that is executed asynchronously later. The first approach gives us the advantage of predictability, while the latter allows for performance enhancements by removing duplicates.

About the next article

In the next article, I will introduce the nx-observe data binding library and explain how to replace ES5 getters/setters by ES6 Proxies to eliminate most of the accessor technique's weaknesses.


If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this Github repository.

I hope you found this a good read, see you next time when I’ll discuss data binding with ES6 Proxies!

If you have any thoughts on the topic, please share them in the comments.

Writing a JavaScript framework - Sandboxed Code Evaluation

This is the third chapter of the Writing a JavaScript framework series. In this chapter, I am going to explain the different ways of evaluating code in the browser and the issues they cause. I will also introduce a method, which relies on some new or lesser known JavaScript features.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing
  3. Sandboxed code evaluation (current chapter)
  4. Data binding introduction
  5. Data Binding with ES6 Proxies
  6. Custom elements
  7. Client-side routing

The evil eval

The eval() function evaluates JavaScript code represented as a string.

A common solution for code evaluation is the eval() function. Code evaluated by eval() has access to closures and the global scope, which leads to a security issue called code injection and makes eval() one of the most notorious features of JavaScript.

Despite being frowned upon, eval() is very useful in some situations. Most modern front-end frameworks require its functionality but don't dare to use it because of the issue mentioned above. As a result, many alternative solutions emerged for evaluating strings in a sandbox instead of the global scope. The sandbox prevents the code from accessing secure data. Usually it is a simple JavaScript object, which replaces the global object for the evaluated code.

The common way

The most common eval() alternative is complete re-implementation - a two-step process, which consists of parsing and interpreting the passed string. First the parser creates an abstract syntax tree, then the interpreter walks the tree and interprets it as code inside a sandbox.

This is a widely used solution, but it is arguably too heavy for such a simple thing. Rewriting everything from scratch instead of patching eval() introduces a lot of bug opportunities and it requires frequent modifications to follow the latest language updates as well.

An alternative way

NX tries to avoid re-implementing native code. Evaluation is handled by a tiny library that uses some new or lesser known JavaScript features.

This section will progressively introduce these features and use them to explain the nx-compile code evaluation library. The library has a function called compileCode(), which works like below.

const code = compileCode('return num1 + num2')

// this logs 17 to the console
console.log(code({num1: 10, num2: 7}))

const globalNum = 12  
const otherCode = compileCode('return globalNum')

// global scope access is prevented
// this logs undefined to the console
console.log(otherCode({num1: 2, num2: 3}))  

By the end of this article, we will implement the compileCode() function in less than 20 lines.

new Function()

The Function constructor creates a new Function object. In JavaScript, every function is actually a Function object.

The Function constructor is an alternative to eval(). new Function(...args, 'funcBody') evaluates the passed 'funcBody' string as code and returns a new function that executes that code. It differs from eval() in two major ways.

  • It evaluates the passed code just once. Calling the returned function will run the code without re-evaluating it.
  • It doesn't have access to local closure variables, however, it can still access the global scope.

function compileCode (src) {
  return new Function(src)
}
new Function() is a better alternative to eval() for our use case. It has superior performance and security, but global scope access still has to be prevented to make it viable.

The 'with' keyword

The with statement extends the scope chain for a statement.

with is a lesser known keyword in JavaScript. It allows a semi-sandboxed execution. The code inside a with block tries to retrieve variables from the passed sandbox object first, but if a variable is not found there, it looks for it in the closure and global scope. Closure scope access is prevented by new Function(), so we only have to worry about the global scope.

function compileCode (src) {
  src = 'with (sandbox) {' + src + '}'
  return new Function('sandbox', src)
}

with uses the in operator internally. For every variable access inside the block, it evaluates the variable in sandbox condition. If the condition is truthy, it retrieves the variable from the sandbox. Otherwise, it looks for the variable in the global scope. By fooling with to always evaluate variable in sandbox as truthy, we could prevent it from accessing the global scope.
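A minimal demonstration of this lookup order (the with block is created through new Function, so the snippet also runs in strict-mode modules):

```javascript
// 'a' resolves from the sandbox; 'Math' is not in the sandbox, so the
// lookup falls through to the global scope.
const code = new Function('sandbox', 'with (sandbox) { return a + Math.round(1.4) }')
console.log(code({ a: 1 })) // 2
```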

Sandboxed code evaluation: Simple 'with' statement

ES6 proxies

The Proxy object is used to define custom behavior for fundamental operations like property lookup or assignment.

An ES6 Proxy wraps an object and defines trap functions, which may intercept fundamental operations on that object. Trap functions are invoked when an operation occurs. By wrapping the sandbox object in a Proxy and defining a has trap, we can overwrite the default behavior of the in operator.

function compileCode (src) {
  src = 'with (sandbox) {' + src + '}'
  const code = new Function('sandbox', src)

  return function (sandbox) {
    const sandboxProxy = new Proxy(sandbox, {has})
    return code(sandboxProxy)
  }
}

// this trap intercepts 'in' operations on sandboxProxy
function has (target, key) {
  return true
}

The above code fools the with block. variable in sandbox will always evaluate to true because the has trap always returns true. The code inside the with block will never try to access the global object.
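A quick check that the trap really seals off the global scope (the secret global is a hypothetical variable for illustration):

```javascript
// A global value is reachable through a plain with block, but not
// through a sandbox proxy whose has trap always returns true.
globalThis.secret = 'top secret'

const code = new Function('sandbox', 'with (sandbox) { return typeof secret }')
const sandboxProxy = new Proxy({}, { has: () => true })

console.log(code(globalThis))   // 'string': the global leaks in
console.log(code(sandboxProxy)) // 'undefined': every lookup stops at the sandbox
```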

Sandboxed code evaluation: 'with' statement and proxies


Symbols

A symbol is a unique and immutable data type and may be used as an identifier for object properties.

Symbol.unscopables is a well-known symbol. A well-known symbol is a built-in JavaScript Symbol, which represents internal language behavior. Well-known symbols can be used to add or overwrite iteration or primitive conversion behavior for example.

The Symbol.unscopables well-known symbol is used to specify an object value of whose own and inherited property names are excluded from the 'with' environment bindings.

Symbol.unscopables defines the unscopable properties of an object. Unscopable properties are never retrieved from the sandbox object in with statements, instead they are retrieved straight from the closure or global scope. Symbol.unscopables is a very rarely used feature. You can read about the reason it was introduced on this page.
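Arrays are the easiest way to see this in action, since Array.prototype defines several unscopable names, such as keys and includes.

```javascript
// 'map' is scopable, so it resolves from the array sandbox; 'keys' is
// listed in Array.prototype[Symbol.unscopables], so the with block
// skips the sandbox and reaches for the outer scope instead.
console.log(Array.prototype[Symbol.unscopables].keys) // true

const code = new Function('sandbox', 'with (sandbox) { return [typeof map, typeof keys] }')
console.log(code([])) // ['function', 'undefined']
```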

Sandboxed code evaluation: 'with' statement and proxies. A security issue.

We can fix the above issue by defining a get trap on the sandbox Proxy, which intercepts Symbol.unscopables retrieval and always returns undefined. This will fool the with block into thinking that our sandbox object has no unscopable properties.

function compileCode (src) {
  src = 'with (sandbox) {' + src + '}'
  const code = new Function('sandbox', src)

  return function (sandbox) {
    const sandboxProxy = new Proxy(sandbox, {has, get})
    return code(sandboxProxy)
  }
}

function has (target, key) {
  return true
}

function get (target, key) {
  if (key === Symbol.unscopables) return undefined
  return target[key]
}

Sandboxed code evaluation: 'with' statement and proxies. Has and get traps.

WeakMaps for caching

The code is now secure, but its performance can still be improved, since it creates a new Proxy on every invocation of the returned function. This can be prevented by caching and using the same Proxy for every function call with the same sandbox object.

A proxy belongs to a sandbox object, so we could simply add the proxy to the sandbox object as a property. However, this would expose our implementation details to the public, and it wouldn't work in case of an immutable sandbox object frozen with Object.freeze(). Using a WeakMap is a better alternative in this case.

The WeakMap object is a collection of key/value pairs in which the keys are weakly referenced. The keys must be objects, and the values can be arbitrary values.

A WeakMap can be used to attach data to an object without directly extending it with properties. We can use WeakMaps to indirectly add the cached Proxies to the sandbox objects.

const sandboxProxies = new WeakMap()

function compileCode (src) {
  src = 'with (sandbox) {' + src + '}'
  const code = new Function('sandbox', src)

  return function (sandbox) {
    if (!sandboxProxies.has(sandbox)) {
      const sandboxProxy = new Proxy(sandbox, {has, get})
      sandboxProxies.set(sandbox, sandboxProxy)
    }
    return code(sandboxProxies.get(sandbox))
  }
}

function has (target, key) {
  return true
}

function get (target, key) {
  if (key === Symbol.unscopables) return undefined
  return target[key]
}

This way only one Proxy will be created per sandbox object.

Final notes

The above compileCode() example is a working sandboxed code evaluator in just 19 lines of code. If you would like to see the full source code of the nx-compile library, you can find it in this Github repository.

Apart from explaining code evaluation, the goal of this chapter was to show how new ES6 features can be used to alter the existing ones, instead of re-inventing them. I tried to demonstrate the full power of Proxies and Symbols through the examples.


If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this Github repository.

I hope you found this a good read, see you next time when I’ll discuss data binding!

If you have any thoughts on the topic, please share them in the comments.

Writing a JavaScript framework - Execution timing, beyond setTimeout

This is the second chapter of the Writing a JavaScript framework series. In this chapter, I am going to explain the different ways of executing asynchronous code in the browser. You will read about the event loop and the differences between timing techniques, like setTimeout and Promises.

The series is about an open-source client-side framework, called NX. During the series, I explain the main difficulties I had to overcome while writing the framework. If you are interested in NX please visit the home page.

The series includes the following chapters:

  1. Project structuring
  2. Execution timing (current chapter)
  3. Sandboxed code evaluation
  4. Data binding introduction
  5. Data Binding with ES6 Proxies
  6. Custom elements
  7. Client-side routing

Async code execution

Most of you are probably familiar with Promise, process.nextTick(), setTimeout() and maybe requestAnimationFrame() as ways of executing asynchronous code. They all use the event loop internally, but they behave quite differently regarding precise timing.

In this chapter, I will explain the differences, then show you how to implement a timing system that a modern framework like NX requires. Instead of reinventing the wheel, we will use the native event loop to achieve our goals.

The event loop

The event loop is not even mentioned in the ES6 spec. JavaScript only has jobs and job queues on its own. A more complex event loop is specified separately by NodeJS and the HTML5 spec. Since this series is about the front-end I will explain the latter one here.

The event loop is called a loop for a reason. It is infinitely looping and looking for new tasks to execute. A single iteration of this loop is called a tick. The code executed during a tick is called a task.

while (eventLoop.waitForTask()) {
  eventLoop.processNextTask()
}
Tasks are synchronous pieces of code that may schedule other tasks in the loop. An easy programmatic way to schedule a new task is setTimeout(taskFn). However, tasks may come from several other sources like user events, networking or DOM manipulation.

Execution timing: Event loop with tasks

Task queues

To complicate things a bit, the event loop can have multiple task queues. The only two restrictions are that events from the same task source must belong to the same queue and tasks must be processed in insertion order in every queue. Apart from these, the user agent is free to do as it wills. For example, it may decide which task queue to process next.

while (eventLoop.waitForTask()) {
  const taskQueue = eventLoop.selectTaskQueue()
  if (taskQueue.hasNextTask()) {
    taskQueue.processNextTask()
  }
}
With this model, we lose precise control over timing. The browser may decide to totally empty several other queues before it gets to our task scheduled with setTimeout().

Execution timing: Event loop with task queues

The microtask queue

Fortunately, the event loop also has a single queue called the microtask queue. The microtask queue is completely emptied in every tick after the current task finishes executing.

while (eventLoop.waitForTask()) {
  const taskQueue = eventLoop.selectTaskQueue()
  if (taskQueue.hasNextTask()) {
    taskQueue.processNextTask()
  }

  const microtaskQueue = eventLoop.microTaskQueue
  while (microtaskQueue.hasNextMicrotask()) {
    microtaskQueue.processNextMicrotask()
  }
}

The easiest way to schedule a microtask is Promise.resolve().then(microtaskFn). Microtasks are processed in insertion order, and since there is only one microtask queue, the user agent can't mess with us this time.

Moreover, microtasks can schedule new microtasks that will be inserted in the same queue and processed in the same tick.
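The difference in behavior is easy to demonstrate (this runs in the browser and in Node alike):

```javascript
// Microtasks run before the next task: 'microtask' beats the setTimeout
// callback even though setTimeout was scheduled first.
const order = []

setTimeout(() => order.push('task'))
Promise.resolve().then(() => order.push('microtask'))
order.push('sync')

setTimeout(() => console.log(order), 10) // ['sync', 'microtask', 'task']
```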

Execution timing: Event loop with microtask queue


Rendering

The last thing missing is the rendering schedule. Unlike event handling or parsing, rendering is not done by separate background tasks. It is an algorithm that may run at the end of every loop tick.

The user agent has a lot of freedom again: It may render after every task, but it may decide to let hundreds of tasks execute without rendering.

Fortunately, there is requestAnimationFrame(), which executes the passed function right before the next render. Our final event loop model looks like this.

while (eventLoop.waitForTask()) {
  const taskQueue = eventLoop.selectTaskQueue()
  if (taskQueue.hasNextTask()) {
    taskQueue.processNextTask()
  }

  const microtaskQueue = eventLoop.microTaskQueue
  while (microtaskQueue.hasNextMicrotask()) {
    microtaskQueue.processNextMicrotask()
  }

  if (shouldRender()) {
    applyScrollResizeAndCSS()
    runAnimationFrames()
    render()
  }
}

Execution timing: Event loop with rendering

Now let’s use all this knowledge to build a timing system!

Using the event loop

Like most modern frameworks, NX deals with DOM manipulation and data binding in the background. It batches operations and executes them asynchronously for better performance. To time these things right, it relies on Promises, MutationObservers and requestAnimationFrame().

The desired timing is this:

  1. Code from the developer
  2. Data binding and DOM manipulation reactions by NX
  3. Hooks defined by the developer
  4. Rendering by the user agent

Step 1

NX registers object mutations with ES6 Proxies and DOM mutations with a MutationObserver synchronously (more about these in the next chapters). It delays the reactions as microtasks until step 2 for optimized performance. This delay is done by Promise.resolve().then(reaction) for object mutations, and handled automatically by the MutationObserver as it uses microtasks internally.

Step 2

The code (task) from the developer finished running. The microtask reactions registered by NX start executing. Since they are microtasks they run in order. Note that we are still in the same loop tick.

Step 3

NX runs the hooks passed by the developer using requestAnimationFrame(hook). This may happen in a later loop tick. The important thing is that the hooks run before the next render and after all data, DOM and CSS changes are processed.

Step 4

The browser renders the next view. This may also happen in a later loop tick, but it never happens before the previous steps in a tick.

Things to keep in mind

We just implemented a simple but effective timing system on top of the native event loop. It works well in theory, but timing is a delicate thing, and slight mistakes can cause some very strange bugs.

In a complex system, it is important to set up some rules about the timing and keep to them later. For NX I have the following rules.

  1. Never use setTimeout(fn, 0) for internal operations
  2. Register microtasks with the same method
  3. Reserve microtasks for internal operations only
  4. Do not pollute the developer hook execution time window with anything else

Rule 1 and 2

Reactions on data and DOM manipulation should execute in the order the manipulations happened. It is okay to delay them as long as their execution order is not mixed up. Mixing execution order makes things unpredictable and difficult to reason about.

setTimeout(fn, 0) is totally unpredictable. Registering microtasks with different methods also leads to mixed-up execution order. For example, microtask2 would incorrectly execute before microtask1 in the example below.
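A sketch of this ordering hazard, with setTimeout standing in for the "wrong" registration method:

```javascript
// microtask1 was registered first but as a task, so the Promise-based
// microtask2 overtakes it.
const order = []

setTimeout(() => order.push('microtask1'))             // registered first
Promise.resolve().then(() => order.push('microtask2')) // registered second

setTimeout(() => console.log(order), 10) // ['microtask2', 'microtask1']
```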


Execution timing: Microtask registration method

Rule 3 and 4

Separating the time window of the developer code execution and the internal operations is important. Mixing these two would start to cause seemingly unpredictable behavior, and it would eventually force developers to learn about the internal working of the framework. I think many front-end developers have experiences like this already.


If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this Github repository.

I hope you found this a good read, see you next time when I’ll discuss sandboxed code evaluation!

If you have any thoughts on the topic, please share them in the comments.

Writing a JavaScript Framework - Project Structuring

In the last couple of months Bertalan Miklos, JavaScript engineer at RisingStack wrote a next generation client-side framework, called NX. In the Writing a JavaScript Framework series, Bertalan shares what he learned during the process:

In this chapter, I am going to explain how NX is structured, and how I solved its use case specific difficulties regarding extendibility, dependency injection and private variables.

The series includes the following chapters.

  1. Project structuring (current chapter)
  2. Execution timing
  3. Sandboxed code evaluation
  4. Data binding introduction
  5. Data Binding with ES6 Proxies
  6. Custom elements
  7. Client-side routing

Project Structuring

There is no structure that fits all projects, although there are some general guidelines. Those who are interested can check out our Node.js project structure tutorial from the Node Hero series.

An overview of the NX JavaScript Framework

NX aims to be an open-source, community-driven project, which is easy to extend and scales well.

  • It has all the features expected from a modern client-side framework.
  • It has no external dependencies, other than polyfills.
  • It consists of around 3000 lines of code altogether.
  • No module is longer than 300 lines.
  • No feature module has more than 3 dependencies.

Its final dependency graph looks like this:

JavaScript Framework in 2016: The NX project structure

This structure provides a solution for some typical framework related difficulties.

  • Extendibility
  • Dependency injection
  • Private variables

Achieving Extendibility

Easy extendibility is a must for community driven projects. To achieve it, the project should have a small core and a predefined dependency handling system. The former ensures that it is understandable, while the latter ensures that it will stay that way.

In this section, I focus on having a small core.

The main feature expected from modern frameworks is the ability to create custom components and use them in the DOM. NX has the single component function as its core, and that does exactly this. It allows the user to configure and register a new component type.
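A minimal stand-in for such a core could look like the sketch below. The component/use/register names mirror the article's API, but the Map-based registry is a simplifying assumption of mine; the real core registers a custom element with the browser instead.

```javascript
// Hypothetical, simplified stand-in for the core component function.
const registry = new Map()

function component (config = {}) {
  const middlewares = []
  return {
    use (middleware) {
      // queue a middleware to run on every new instance
      middlewares.push(middleware)
      return this // chainable, Express-style
    },
    register (name) {
      // store the configured component type under its tag name
      registry.set(name, { config, middlewares })
      return this
    }
  }
}

component().register('comp-name')
console.log(registry.has('comp-name')) // → true
```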


The registered comp-name is a blank component type which can be instantiated inside the DOM as expected.


The next step is to ensure that components can be extended with new features. To keep both simplicity and extendibility, these new features should not pollute the core. This is where dependency injection comes in handy.

Dependency Injection (DI) with Middlewares

If you are unfamiliar with dependency injection, I suggest you read our article on the topic: Dependency Injection in Node.js.

Dependency injection is a design pattern in which one or more dependencies (or services) are injected, or passed by reference, into a dependent object.

DI removes hard-coded dependencies but introduces a new problem: the user has to know how to configure and inject all of them. Most client-side frameworks have DI containers that do this instead of the user.

A Dependency Injection Container is an object that knows how to instantiate and configure objects.

Another approach is the middleware DI pattern, which is widely used on the server side (Express, Koa). The trick here is that all injectable dependencies (middlewares) have the same interface and can be injected the same way. In this case, no DI container is needed.

I went with this solution to keep things simple. If you have ever used Express, the code below will look very familiar.

nx.component()
  .use(paint) // inject paint middleware
  .use(resize) // inject resize middleware
  .register('comp-name')

function paint (elem, state, next) {
  // elem is the component instance, set it up or extend it here
  elem.style.color = 'red'
  // then call next to run the next middleware (resize)
  next()
}

function resize (elem, state, next) {
  elem.style.width = '100px'
  next()
}
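A chain like this can be driven by a tiny runner that calls each injected middleware in turn. The sketch below is an illustrative assumption of the mechanism, not the actual NX internals:

```javascript
// Illustrative sketch: run injected middlewares in order on a new
// component instance (names and shape are assumptions, not NX code).
function runMiddlewares (middlewares, elem, state) {
  let index = 0
  function next () {
    const middleware = middlewares[index++]
    // each middleware decides when to hand control to the next one
    if (middleware) middleware(elem, state, next)
  }
  next()
}

const elem = {}
runMiddlewares([
  (elem, state, next) => { elem.color = 'red'; next() },
  (elem, state, next) => { elem.width = '100px'; next() }
], elem, {})

console.log(elem) // → { color: 'red', width: '100px' }
```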

Middlewares execute when a new component instance is attached to the DOM, and they typically extend the component instance with new features. Having different libraries extend the same object leads to name collisions. Exposing private variables deepens this problem and may cause accidental usage by others.

Having a small public API and hiding the rest is a good practice to avoid these.

Handling privacy

Privacy is handled by function scope in JavaScript. When cross-scope private variables are required, people tend to prefix them with _ to signal their private nature and expose them publicly. This prevents accidental usage but doesn't avoid name collisions. A better alternative is the ES6 Symbol primitive.

A symbol is a unique and immutable data type, that may be used as an identifier for object properties.

The below code demonstrates a symbol in action.

const color = Symbol()

// a middleware
function colorize (elem, state, next) {
  elem[color] = 'red'
  next()
}

Now 'red' is only reachable by owning a reference to the color symbol (and the element). The privacy of 'red' can be controlled by exposing the color symbol to different extents. With a reasonable number of private variables, having a central symbol storage is an elegant solution.

// symbols module
exports.private = {
  color: Symbol('color from colorize')
}
exports.public = {}

And an index.js like below.

// main module
const symbols = require('./symbols')  
exports.symbols = symbols.public  

The storage is accessible inside the project for all modules, but the private part is not exposed to the outside. The public part can be used to expose low-level features to external developers. This prevents accidental usage, since a developer has to explicitly require the needed symbol to use it. Moreover, symbol references cannot collide the way string names can, so name collisions are impossible.
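That collision-freedom is easy to demonstrate: two symbols are never equal, even when created with identical descriptions.

```javascript
// Two independently created symbols never collide, unlike string keys.
const a = Symbol('text')
const b = Symbol('text')

const obj = {}
obj[a] = 'from module A'
obj[b] = 'from module B'

console.log(a === b)        // → false
console.log(obj[a], obj[b]) // → from module A from module B
```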

The points below summarize the pattern for different scenarios.

1. Public variables

Use them normally.

function (elem, state, next) {
  elem.publicText = 'Hello World!'
  next()
}

2. Private variables

Cross-scope variables that are private to the project should have a symbol key added to the private symbol registry.

// symbols module
exports.private = {
  text: Symbol('private text')
}
exports.public = {}

And required from it when needed somewhere.

const private = require('symbols').private

function (elem, state, next) {
  elem[private.text] = 'Hello World!'
  next()
}

3. Semi-private variables

Variables of the low-level API should have a symbol key added to the public symbol registry.

// symbols module
exports.private = {
  text: Symbol('private text')
}
exports.public = {
  text: Symbol('exposed text')
}

And required from it when needed somewhere.

const exposed = require('symbols').public

function (elem, state, next) {
  elem[exposed.text] = 'Hello World!'
  next()
}


If you are interested in the NX framework, please visit the home page. Adventurous readers can find the NX source code in this GitHub repository.

I hope you found this a good read, see you next time when I’ll discuss execution timing!

If you have any thoughts on the topic, please share them in the comments.