
Ferenc Hámori

CMO of RisingStack


This is what Node.js is used for in 2017 - Survey Results

The Node.js Foundation just published the results of worldwide research conducted to understand what Node is used for nowadays, and to identify possible improvements for our favorite open-source platform.

The survey was conducted online from November 30 to January 16, 2017 via a self-administered survey with 1,405 respondents in total. The responses were analyzed by an independent research consultancy.

Let's see what Node.js is used for!

First of all, the survey concludes that...

Sounds fancy! But what does it mean? Well, let's see.

Developers mainly use Node.js on the back-end, but it is popular as a full-stack and front-end solution as well.

How organizations use Node.js

This is no surprise given that one of the main strengths of Node is that you can use the same language on the entire stack.

Therefore, all developers can easily understand what is going on on the other side and make changes if necessary.

The Foundation asked the respondents about what they build with Node.js at the moment.

The results show that Node.js is used primarily to build web applications, but it is also a popular choice for building enterprise applications.

The growth of Node.js within companies is a testament to the platform's versatility. It is moving beyond being simply an application platform, and beginning to be used for rapid experimentation with corporate data, application modernization, and IoT solutions. (Source: Forrester Analysis)

What is Node.js used for in 2017

The survey lets us peek into what kind of deployment choices Node developers make. The results show that AWS is the primary deployment location for running Node.js apps in production - but it looks like on-premises (or self-hosted) infrastructures are extremely popular as well.

Multiple deploy targets for Node.js

This data seems to match what we at RisingStack measured a year ago via our Node.js survey. The only noticeable difference is that while a year ago Heroku and DigitalOcean were competing neck and neck for Node developers, Heroku now seems to have gained a slight advantage.

Node.js Survey - Where developers run their apps

Who uses Node.js?

Since Node.js has had an LTS plan (long-term support focused on security and stability) since 2015, it's no wonder that huge enterprises are constantly adding it to their stacks.

Node.js users by Industry

Node didn't just conquer the enterprise sector - it conquered the whole globe, too. Collectively, Node.js users span 85+ countries and speak more than 45 languages.

Node.js Global Footprint

It is really interesting to see that according to the survey, the majority of Node developers reside in Europe (41%), not in North America.

Node.js Global reach

Why Developers Love Node.js

Why developers love Node.js

According to the participants of the survey, Node.js increases productivity and application performance in a significant way.

Benefits of using Node.js

Also, it's great to see that the benefits of using Node increase with usage time.

Developers and managers who use Node.js for more than two years praise these positive effects even more.

Node.js benefits increase over time

The survey revealed that big data/business analytics developers and managers are more likely to see major business impacts after introducing Node.js into their infrastructure, with key benefits being productivity, satisfaction, cost containment, and increased application performance.

The “typical” Node.js user is college educated in his early 30’s with 6-9 years development experience.

According to the "user demographics" panel of the survey, most developers use Node v6 (57%) and spend half of their time writing code in Node.

Node.js usage profile

The survey also shows us that the majority of developers improve their knowledge with the help of online courses and resources, and it's great to see that NodeSchool is pretty popular as well.

How users learn Node.js

The future of Node.js

As TechCrunch reported a few months ago, Node.js became a leader in the enterprise-grade open source category.

Battery Open Source Index - Node.js & RisingStack

This means that the platform is one of today’s hottest new enterprise technologies. As a result, many big companies — from financial giants to retailers to services firms — are building their businesses around Node.js instead of legacy languages like PHP or Java.

Node.js is a leader according to the BOSS index

One thing is sure:

Learning Node.js

In case you'd like to enhance your Node.js knowledge, we recommend checking out two of our free online courses and several of our ebooks:

Free online guides:

  • Node Hero is a beginner tutorial series focusing on the basics of Node. (13 chapters total)

  • Node.js at Scale is a collection of articles focusing on the needs of companies with bigger Node.js installations, and developers who already learned the basics of Node. (19 chapters total)

Free ebooks:

The 10 Most Important Node.js Articles of 2016

2016 was an exciting year for Node.js developers. I mean - just take a look at this picture:

Every Industry has adopted Node.js

Looking back through the 6-year-long history of Node.js, we can tell that our favorite framework has finally matured to be used by the greatest enterprises, from all around the world, in basically every industry.

More great news: Node.js is the biggest open source platform ever - with 15 million+ downloads/month and more than a billion package downloads/week. Contributions have risen as well: more than 1,100 developers have built Node.js into the platform it is today.

To summarize this year, we collected the 10 most important articles we recommend reading. These include the biggest scandals, events, and improvements surrounding Node.js in 2016.

Let's get started!

#1: How one developer broke Node, Babel and thousands of projects in 11 lines of JavaScript

Programmers were shocked to see broken builds and failed installations after Azer Koçulu unpublished more than 250 of his modules from npm in March 2016 - breaking thousands of projects, including Node and Babel.

Koçulu deleted his code because one of his modules was called Kik - the same name as the instant messaging app - so Kik's lawyers claimed brand infringement, and npm took the module away from him.

"This situation made me realize that NPM is someone’s private land where corporate is more powerful than the people, and I do open source because Power To The People." - Azer Koçulu

One of Azer's modules was left-pad, which padded out the left-hand side of strings with zeroes or spaces. Unfortunately, thousands of modules depended on it.
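For context, left-pad's entire job fits in a few lines. A sketch of what it did (not Koçulu's exact code):

```javascript
// Roughly what left-pad did: prepend spaces (or a given character)
// until the string reaches the requested length.
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch || ' ';
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}
```

A dozen lines, yet removing it from the registry broke builds across the ecosystem.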

You can read the rest of this story in The Register's great article, with updates on the outcome of this event.

#2: Facebook partners with Google to launch a new JavaScript package manager

In October 2016, Facebook & Google launched Yarn, a new package manager for JavaScript.

The reason? There were a couple of fundamental problems with npm for Facebook's workflow.

  • At Facebook's scale, npm didn't work well.
  • npm slowed down the company’s continuous integration workflow.
  • Checking all of the modules into a repository was also inefficient.
  • npm is, by design, nondeterministic — yet Facebook’s engineers needed a consistent and reliable system for their DevOps workflow.

Instead of hacking around npm's limitations, Facebook wrote Yarn from scratch:

  • Yarn does a better job at caching files locally.
  • Yarn is also able to parallelize some of its operations, which speeds up the install process for new modules.
  • Yarn uses lockfiles and a deterministic install algorithm to create consistent file structures across machines.
  • For security reasons, Yarn does not allow developers who write packages to execute other code that’s needed as part of the install process.

Yarn, which promises to even give developers that don’t work at Facebook’s scale a major performance boost, still uses the npm registry and is essentially a drop-in replacement for the npm client.

You can read the full article with the details on TechCrunch.

#3: Debugging Node.js with Chrome DevTools

New support for Node.js debuggability landed in Node.js master in May.

To use the new debugging tool, you have to

  • nvm install node
  • Run Node with the inspect flag: node --inspect index.js
  • Open the provided URL, which starts with “chrome-devtools://..”
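To have something to poke at, a minimal hypothetical index.js like this works well; the `debugger` statement pauses execution once DevTools is attached:

```javascript
// A deliberately CPU-heavy function - handy for trying the profiler too.
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// With DevTools attached, execution pauses here; without it, it's a no-op.
debugger;

console.log('fib(20) =', fib(20));
```

Run it with `node --inspect index.js` and you can set breakpoints, step through, and profile from Chrome.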

Read the great tutorial from Paul Irish to get all the features and details right!

#4: How I built an app with 500,000 users in 5 days on a $100 server

Jonathan Zarra, the creator of GoChat for Pokémon GO, reached 1 million users in 5 days. Zarra had a hard time paying for the servers (around $4,000/month) needed to host 1M active users.

He never expected to get this many users. He built the app as an MVP, planning to care about scalability later. He built it to fail.

Zarra was already talking to VCs about growing and monetizing his app.

He was wrong.

Thanks to its poor design, GoChat was unable to scale to this many users and went down. A lot of users were lost, and a lot of money was spent.

500,000 users in 5 days on $100/month server

Erik Duindam, the CTO of Unboxd, has been designing and building web platforms for hundreds of millions of active users throughout his career.

Frustrated by the poor design and sad fate of Zarra's GoChat, Erik decided to build his own solution, GoSnaps: The Instagram/Snapchat for Pokémon GO.

Erik was able to build a scalable MVP with Node.js in 24 hours, which could easily handle 500k unique users.

The whole setup ran on a single $100/month medium Google Cloud server, plus (cheap) Google Cloud Storage for the images - and it still performed exceptionally well.

GoSnap - The Node.js MVP that can Scale

How did he do it? Well, you can read the full story for the technical details:

#5: Getting Started with Node.js - The Node Hero Tutorial Series

The aim of the Node Hero tutorial series is to help novice developers get started with Node.js and deliver software products with it!

Node Hero - Getting started with Node.js

You can find the full table of contents below:

  1. Getting started with Node.js
  2. Using NPM
  3. Understanding async programming
  4. Your first Node.js HTTP server
  5. Node.js database tutorial
  6. Node.js request module tutorial
  7. Node.js project structure tutorial
  8. Node.js authentication using Passport.js
  9. Node.js unit testing tutorial
  10. Debugging Node.js applications
  11. Node.js Security Tutorial
  12. Deploying Node.js application to a PaaS
  13. Monitoring Node.js Applications

#6: Using RabbitMQ & AMQP for Distributed Work Queues in Node.js

This tutorial helps you to use RabbitMQ to coordinate work between work producers and work consumers.

Unlike Redis, RabbitMQ's sole purpose is to provide a reliable and scalable messaging solution with many features that are not present or hard to implement in Redis.

RabbitMQ is a server that runs locally or on some node on the network. Clients can be work producers, work consumers, or both, and they talk to the server using a protocol named Advanced Message Queuing Protocol (AMQP).
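As a rough sketch of that producer/consumer setup - assuming the popular amqplib client (`npm install amqplib`) and a RabbitMQ server on localhost, neither of which this summary takes from the tutorial itself:

```javascript
const QUEUE = 'jobs';

// Serialize a job object into a Buffer, the form sendToQueue() expects.
function encodeJob(job) {
  return Buffer.from(JSON.stringify(job));
}

// Producer: publish one durable job, then disconnect.
async function produce(job) {
  const amqp = require('amqplib'); // assumed installed; required lazily
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.sendToQueue(QUEUE, encodeJob(job), { persistent: true });
  await ch.close();
  await conn.close();
}

// Consumer: take one job at a time, ack when the work is done.
async function consume() {
  const amqp = require('amqplib');
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue(QUEUE, { durable: true });
  ch.prefetch(1); // don't grab a new job before finishing the current one
  ch.consume(QUEUE, (msg) => {
    const job = JSON.parse(msg.content.toString());
    console.log('working on', job);
    ch.ack(msg); // tell RabbitMQ the job is done and can be deleted
  });
}
```

The durable queue, persistent messages, and explicit acks are what give you the reliability that plain Redis lists don't provide out of the box.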

You can read the full tutorial here.

#7: Node.js, TC-39, and Modules

James M Snell, IBM's Technical Lead for Node.js, attended his first TC-39 meeting in late September.

The reason?

One of the newer JavaScript language features defined by TC-39 — namely, Modules — has been causing the Node.js core team a bit of trouble.

James and Bradley Farias (@bradleymeck) have been trying to figure out how to best implement support for ECMAScript Modules (ESM) in Node.js without causing more trouble and confusion than it would be worth.

ECMAScript modules vs. CommonJS

Because of the complexity of the issues involved, sitting down face to face with the members of TC-39 was deemed to be the most productive path forward.

The full article discusses what they found and understood from this conversation.

#8: The Node.js Developer Survey & its Results

We at Trace by RisingStack conducted a survey during the summer of 2016 to find out how developers use Node.js.

The results show that MongoDB, RabbitMQ, AWS, Jenkins, Docker and Amazon Container Services are the go-to choices for developing, containerizing and shipping Node.js applications.

The results also reveal Node developers' major pain point: debugging.

Node.js Survey - How do you identify issues in your app? Using logs.

You can read the full article with the Node.js survey results and graphs here.

#9: The Node.js Foundation Pledges to Manage Node Security Issues with New Collaborative Effort

The Node Foundation announced at Node.js Interactive North America that it will oversee the Node.js Security Project which was founded by Adam Baldwin and previously managed by ^Lift.

As part of the Node.js Foundation, the Node.js Security Project will provide a unified process for discovering and disclosing security vulnerabilities found in the Node.js module ecosystem. Governance for the project will come from a working group within the foundation.

The Node.js Foundation will take over the following responsibilities from ^Lift:

  • Maintaining an entry point for ecosystem vulnerability disclosure;
  • Maintaining a private communication channel for vulnerabilities to be vetted;
  • Vetting participants in the private security disclosure group;
  • Facilitating ongoing research and testing of security data;
  • Owning and publishing the base dataset of disclosures, and
  • Defining a standard for the data, which tool vendors can build on top of, and to which security vendors can add data and value as well.

You can read the full article discussing every detail on The New Stack.

#10: The Node.js Maturity Checklist

The Node.js Maturity Checklist gives you a starting point to understand how well Node.js is adopted in your company.

The checklist follows your adoption through establishing company culture, teaching your employees, setting up your infrastructure, writing code, and running the application.

You can find the full Node.js Maturity Checklist here.

Node.js Examples - What Companies Use Node for in 2016

We were amazed to see how much everyone appreciated our previous article which summarized how enterprises use Node.js, so we decided to do a follow up on the subject and write more about well-known companies building software products with Node.

This article on Node.js examples shows how Groupon, Lowe’s Home Improvement and Skycatch have successfully deployed their enterprise applications with Node.js. The source of these case studies is the Node Foundation's Enterprise Conversations series. If you're interested in why we joined the Foundation and what its goals are, head over here.

Groupon rebuilt its entire web layer with Node.js

The first participant in the Node Foundation's Enterprise Conversations series is Adam Geitgey, who has been the Director of Software Engineering for five years at Groupon, one of the largest e-commerce companies.

When he arrived at the company, it was mainly a Ruby on Rails shop, and everything was running as one huge monolithic application. That worked well for a long time, but eventually it became too hard to maintain, and they seemed to outgrow it.

Besides that, Groupon made a number of acquisitions in recent years, so in addition to their Ruby on Rails stack they ended up with a new Java stack in Europe and a PHP stack in South America.

Groupon felt the need to replace their technology stack, so they started looking for a more suitable software platform around 3-4 years ago.


The reasons for choosing Node

Groupon decided to adopt Node.js for the following reasons:

  • JavaScript is close to being a universal language, so it requires less effort to learn and work with, and communication between developers is easy.
  • The scaling of Node.js applications worked well on tests. Node did not only allow them to unify their development language but also gave them performance improvements in some cases.
  • Node developers can reuse previously written code, which can be a huge help from time to time.
  • Node.js was the most uniform platform at Groupon. Even though they used Java for a lot of backend services, the frameworks and ways how Java was used were diverse. This gave them a way to move a large chunk of their software onto one platform in one swoop.

As a result of the decision, the Groupon engineering team rebuilt their entire web layer with Node.js. During the rebuilding process, Adam’s task was to manage the team which developed the platform and the framework which was used by other product teams to build and ship Node apps in production.

The team also released several open-source libraries that they built along the way:

  • gofer, which is an API client library they used to talk to backend services.
  • node-cached, a caching library for Node.js.

Today Groupon is using Node on multiple platforms:

  • Around 300-400 backend services run on Node.js, mixed with Java and Ruby.
  • They use Node as an API integration layer.
  • They use it for all of their client apps, including their website.

Currently, Groupon has 70 Node.js apps in production, used in 30 countries. Overall, Groupon uses Node.js heavily on the front-end, and here and there for backend purposes.

The future of Node at Groupon

Regarding the future, they are fully committed to investing in Node for the web platform. All of their production services are on Node 4 right now, but they are already excited about Node 6 and are waiting for its LTS version to come out.

In the past - because Groupon was on Ruby - they have been using CoffeeScript a lot, and it is a great chance for them to finally migrate from CoffeeScript and standardize on plain JavaScript.

Another big project Groupon is working on is moving from a model where developers maintain their own servers to a model where the company provides them with clusters of servers and their apps run on them - more like a Heroku model.

Node.js: the glue of Skycatch

Andre Deutmeyer is the next participant in the Node Foundation's Enterprise Conversations series. His role is to lead the web, infrastructure, and development team at Skycatch.

Skycatch is a data company helping to capture, manage, and analyze commercial drone data. Skycatch sees construction or mining sites as a database that needs to be queried. Existing approaches, like writing raw SQL queries, are hard and time-consuming, while Skycatch's solution makes it easy to extract actionable data from the sites.

Skycatch has small cross-functional teams with 20 developers, and as I already mentioned, Andre’s role is to lead the web, infrastructure and development team. He is involved in architecting and scaling out data processing, while his goal is to deliver the data that you send them reliably and quickly.

What helps them with that? Node.js, of course - but where do they use it?

“We are using node everywhere you can think of - Node is our glue.”

They use it on their drones, and across their management and iOS apps. Almost their entire backend is running on Node. For all of their data processing, they have a lot of microservices that are constantly communicating with each other and Node is what keeps that going smoothly.

What are the benefits of using Node.js at Skycatch?

Node has a great impact on the development at Skycatch, as Andre says:

“You can’t really put a price on the ability to move fluently from the front-end development into a service architecture style and scaling things is easy because there is no hurdle moving between frontend and backend. It scales much more easily than if we had chosen a different language to run on the servers.”

They have a lot of people working on the web, API, and data processing sides. Developers can figure out during projects which part of the stack they prefer working on; again, there's no huge mental hurdle in moving from one to the other, because the programming language is not a barrier.

The future of Node.js at Skycatch

Recently they've been looking at AWS Lambda, as it has released support for Node 4. Since then, they've been in a big hurry to rewrite a lot of their smaller services to make use of AWS Lambda's infrastructure. They are a small team, so they want to focus on the product, not on scaling the infrastructure, and AWS Lambda is perfect for that.

Lowe’s Home Improvement thinks differently thanks to Node.js

The latest participant in the Node Foundation's Enterprise Conversations series was Rick Adam. He is the manager of the IT application portfolio of digital interfaces at Lowe’s Home Improvement.

His role at Lowe’s is the management of the applications and teams that drive the presentation tier of Lowe’s digital properties. Rick manages a team of 25 developers, including the software architecture team.

Lowe’s history and how they arrived at Node.js

Coming out of the recession era of 2007-2008, the company started to see that the home market continued to grow, and they needed to drive further investment into the digital space.

As new consumer technologies began to come out for smartphones and tablets, the company began to look at Lowe’s Digital not only as a valuable sales channel for the company, but also as a true sales driver.

They began to build the engineering team which consisted of about 2-3 web developers back at that time.

Killing the Monolith

They started to look for a new technology because their application was a big monolithic app, and it was a daunting process to release and introduce any change, regardless of how small it might have been.

Since Lowe’s is in the retail business, their number one priority is to drive customers through a journey and enable them to finish the checkout process. However, in those days minor changes, like a text change on the product list page, required the full application to be updated, packaged, and deployed again - which crippled their ability to move fast.

Finally, the risk and quality assurance behind doing that became so daunting that their business and IT people weren't comfortable keeping up with the pace the business required.

Although they looked at off-the-shelf software solutions and larger applications to drive their digital property, traditionally it hadn't even been part of their process to consider open-source technologies. However, they began to reconsider their application portfolio and figure out how to introduce more open-source software.

Lowe’s digital team was on the frontline, trying to drive their technology forward. They were in the middle of a major re-architecting and redesign project for www.lowes.com and their mobile site, with the goal of bringing a new experience to the table.

During that project, they started looking at the right technology stack for their business and brand, which led them to Node.js about two years ago.

How Lowe’s profits from adopting Node.js

When they looked at Node.js, it made sense: they had a great team of web developers already well-skilled in JavaScript, so they didn't have to go out and find new talent or a new skill set.

“We had a great team here, and the application made sense just from how it plays into our target status quo!”

Node is a perfect technology for their web tier for brokering API requests. Also, Lowe’s has seen a lot of growth both from the company itself and from the technology that they are introducing.

“It's been exciting to see the growth and the maturity of our development acumen and where we are going to take the brand.”

One of the aspects they liked about Node was the asynchronous model, which provides the ability to call multiple services at once; when they all finish, the result can be rendered within their microservices model.

“It delivers a one-page experience that calls five different little services, without having to do the traditional waterfall approach.”
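That pattern maps naturally onto Promise.all. A sketch with hypothetical page services (simulated here as already-resolved promises, since the real service URLs are Lowe's internals):

```javascript
// Hypothetical page services, simulated as already-resolved promises.
function fetchProduct() { return Promise.resolve({ id: 42, name: 'Drill' }); }
function fetchPrice()   { return Promise.resolve({ id: 42, price: 99 }); }
function fetchReviews() { return Promise.resolve([{ stars: 5 }]); }

// All requests are in flight at once; rendering waits only for the
// slowest one, instead of summing the latencies in a waterfall.
function renderProductPage() {
  return Promise.all([fetchProduct(), fetchPrice(), fetchReviews()])
    .then(function (results) {
      return { product: results[0], price: results[1], reviews: results[2] };
    });
}
```

With five real services behind a page, the difference between parallel and sequential calls is the difference between one round trip and five.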

Node has been doing great performance-wise, especially at scale. Applications use fewer resources in Node.js than they traditionally would in Java to render a page, because small, focused applications serve a single page better than a monolithic app does.

What has also been ideal for them is reusing their front-end developers' skills to work with JavaScript on the backend. That is especially useful because their teams were traditionally segregated: back-end developers worked in Java, while front-end developers handled the CSS, JavaScript, and HTML.

By going with Node.js, the engineering team was able to take full ownership of the entire UI stack, from the backend through the view layer to the actual front-end. They were able to reuse resources well-versed in JavaScript and HTML and bring them to Node.

Now they can put new features together quickly, and even prototype for research and user testing, then take an idea to production level and release it without putting the other parts of their application stack at risk. Rick even says:

“Node.js really opened some eyes to the potential here to think differently than we've ever been able to in the past six years.”

Node.js Examples: The Conclusion

As has been pointed out, companies can benefit a lot from adopting Node.js on both the developer and the application level. The latter is especially significant when it comes to performance and scalability.


If you’d like to learn more, I suggest checking out our Node Hero tutorial series and delivering software products using Node!

Controlling the Node.js security risk of npm dependencies

This article is a guest post from Guy Podjarny, CEO at Snyk, building dev tools to fix known vulnerabilities in open source components

Open source packages - and npm specifically - are undoubtedly awesome. They make developers extremely productive by giving each of us a wealth of existing functionality just waiting to be consumed. If we had to write all this functionality ourselves, we'd struggle to create a fraction of what we do today.

As a result, a typical Node.js application today consumes LOTS of npm packages, often hundreds or thousands of them. What we often overlook, however, is that each of these packages, alongside its functionality, also pulls in its Node.js security risks. Many packages open new ports, thus increasing the attack surface. Roughly 76% of Node shops use vulnerable packages, some of which are extremely severe; and open source projects regularly grow stale, neglecting to fix security flaws.

Inevitably, using npm packages will expose you to security risks. Fortunately, there are several questions you can ask which can reduce your risk substantially. This post outlines these questions, and how to get them answered.

#1: Which packages am I using?

The more packages you use, the higher the risk of having a vulnerable or malicious package amongst them. This holds true not only for the packages you use directly but also for the indirect dependencies they use.

Discovering your dependencies is as easy as running npm ls in your application’s parent folder, which lists the packages you use. You can use the --prod argument to only display production dependencies (which impact your security the most), and add --long to get a short description of each package. Check out this post to better understand how you can slice and dice your npm dependencies.

~/proj/node_redis $ npm ls --prod --long
redis@…
│ /Users/guypod/localproj/playground/node_redis
│ Redis client library
│ git://github.com/NodeRedis/node_redis.git
│ https://github.com/NodeRedis/node_redis
├── double-ended-queue@…
│   Extremely fast double-ended queue implementation
│   git://github.com/petkaantonov/deque.git
│   https://github.com/petkaantonov/deque
├── redis-commands@…
│   Redis commands
│   git+https://github.com/NodeRedis/redis-commands.git
│   https://github.com/NodeRedis/redis-commonds
└── redis-parser@…
    Javascript Redis protocol (RESP) parser

Figure: Inventorying node_redis's few dependencies

A new crop of Dependency Management services, such as bitHound and VersionEye, can also list the dependencies you use, as well as track some of the information below.


Now that you know what you have, you can ask a few questions to assess the risk each package involves. Below are a few examples of questions you should ask, why you should ask them, and suggestions on how you can get them answered.


#2: Am I still using this package?

As time goes by and your code changes, you’re likely to stop using certain packages and add new ones instead. However, developers don’t typically remove a package from the project when they stop using it, as some other part of the code may need it.

As a result, projects have a tendency to accumulate unused dependencies. While not directly a security concern, these dependencies unnecessarily grow your attack surface and add clutter to the code. For instance, an attacker may trick one package into loading an unused package with a more severe vulnerability, escalating the potential damage.


Checking for unused dependencies is most easily done using the depcheck tool. depcheck scans your code for require and import statements, correlates those with the packages installed or mentioned in your package.json, and produces a report. The command can be tweaked in various ways using command-line flags, making it easy to automate checking for unused dependencies.

~/proj/Hardy $ depcheck
Unused dependencies  
* cucumber
* selenium-standalone
Unused devDependencies  
* jasmine-node

Figure: Checking for unused dependencies on the Hardy project

#3: Are other developers using this package?

Packages used by many are also more closely watched. The likelihood of someone having already encountered and addressed a security issue in them is higher than in a less-utilized package.

For example, the secure-compare package was created to support string comparison that was not susceptible to a timing attack. However, a fundamental flaw in the package achieved the exact opposite, making certain comparisons extremely time-sensitive (and incorrect).

If you looked more closely, you’d see this package is very lightly used, downloaded only 20 times a day. If this were a more popular package, odds are someone would have found and reported the functional flaw sooner.

The easiest way to assess package usage is its download rate, indicated in the “Stats” section of the npm’s package page. You can extract those stats automatically using the npm stats API, or browse historic stats on npm-stat.com. Alternatively, you can look at the number of “Dependent” packages – other packages that use the current one.

Node.js Security: Download stats for the `redis` package

#4: Am I using the latest version of this package?

Bugs, including security bugs, are constantly found and – hopefully – fixed. Also, it’s quite common to see newly reported vulnerabilities fixed only on the newest major branch of a project.

For instance, in early 2016, a Regular Expression Denial of Service (ReDoS) vulnerability was reported on the HMAC package hawk. ReDoS is a vulnerability where a long or carefully crafted input causes a regular expression match to take a very long time to compute. The processing thread does not serve new requests in the meantime, enabling a denial of service attack with just a small number of requests.
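To make the ReDoS mechanics concrete, here is a minimal sketch using /^(a+)+$/, a textbook catastrophic-backtracking pattern (not the actual expression from hawk). On a failing input, each extra character roughly doubles the matching time:

```javascript
// Nested quantifiers force the engine to try exponentially many ways
// to split the 'a's before the match can fail.
const evil = /^(a+)+$/;

function matchTime(input) {
  const start = process.hrtime.bigint();
  const matched = evil.test(input);
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  return { matched, ms };
}

// The trailing '!' guarantees a failed match, triggering full backtracking.
for (const n of [10, 14, 18, 22]) {
  const { ms } = matchTime('a'.repeat(n) + '!');
  console.log(`n=${n}: ${ms.toFixed(2)} ms`);
}
```

With input lengths in the hundreds, a single such request can pin a Node process's only JavaScript thread for minutes, which is exactly the denial-of-service effect described above.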

The vulnerability in hawk was quickly fixed in its latest major version stream, 4.x, but older versions were left without a fix. Specifically, it left an unfixed vulnerability in the widely used request package, which used the 3.x versions of hawk. The author later accepted Snyk’s pull request with a fix for the 3.x branch, but request users were exposed for a while, and the issue still exists in the older major release branches. This is just one example, but as a general rule, your dependencies are less likely to have security bugs if they’re on the latest version.

You can find out whether or not you’re using the latest version using the npm outdated command. This command also supports the --prod flag to ignore dev dependencies, as well as --json to simplify automation. You can also use Greenkeeper to proactively inform you when you're not using the latest version.

~/proj/handlebars.js $ npm outdated --prod
Package     Current  Wanted  Latest  Location  
async         1.5.2   1.5.2   2.0.1  handlebars  
source-map    0.4.4   0.4.4   0.5.6  handlebars  
uglify-js     2.6.2   2.7.3   2.7.3  handlebars  
yargs        3.32.0  3.32.0   5.0.0  handlebars  

Figure: npm outdated on handlebars prod dependencies

#5: When was this package last updated?

Creating an open source project, including npm packages, is fun. Many talented developers create such projects in their spare time, investing a lot of time and energy in making them good. Over time, however, the excitement often wears off, and life changes can make it hard to find the needed time.

As a result, npm packages often grow stale, no longer adding features and fixing bugs only slowly – if at all. This reality isn’t great for functionality, but it’s especially problematic for security. Functional bugs typically only get in your way when you’re building something new, allowing some leeway for how quickly they’re addressed. Fixing security vulnerabilities is more urgent – once they become known, attackers may exploit them, so time to fix is critical.

"Fixing security vulnerabilities is urgent – once they become known, attackers will exploit them!" via @RisingStack

Click To Tweet

A good example of this case is a Cross-Site Scripting vulnerability in the marked package. Marked is a popular markdown parsing package, downloaded nearly 2M times a month. Initially released in mid-2011, Marked evolved rapidly over the next couple of years, but the pace slowed in 2014 and work stopped completely in mid-2015.

The XSS vulnerability was disclosed around the same time, and it has remained untouched ever since. The only way to protect yourself from the issue is to stop using marked, or use a Snyk patch, as explained below.

Inspecting your packages for their last update date is a good way to reduce the chance you’ll find yourself in such a predicament. You can do so via the npm UI or by running npm view <package> time.modified.

$ npm view marked time.modified

Figure: checking last modified time on marked
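This check is easy to script, too. A small sketch: the date string is what `npm view <package> time.modified` prints, and the 12-month cutoff is an arbitrary example of ours, not a hard rule.

```javascript
// Sketch: flag packages that haven't been published in a while.
function monthsSince(lastModified, now = new Date()) {
  const ms = now - new Date(lastModified);
  return ms / (1000 * 60 * 60 * 24 * 30); // approximate 30-day months
}

function looksStale(lastModified, maxMonths = 12) {
  return monthsSince(lastModified) > maxMonths;
}
```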

#6: How many maintainers do these packages have?

Many npm packages have only a single maintainer, or a very small group of them. While there’s nothing specifically wrong with that, those packages have a higher risk of being abandoned. In addition, larger teams are more likely to include at least some members who better understand and care more about security.

Identifying the packages that have only a few maintainers is a good way to assess your risk. Tracking npm maintainers is easily automated by using npm view <pkg> maintainers.

$ npm view express maintainers

[ 'dougwilson <[email protected]>',
  'hacksparrow <[email protected]>',
  'jasnell <[email protected]>',
  'mikeal <[email protected]>' ]

Figure: maintainers of the express package, per npm

However, many packages with a full team behind them publish automatically through a single npm account. Therefore, you’ll do well to also inspect the GitHub repository used to develop this package (the vast majority of npm packages are developed on GitHub). In the example above, you'll find there are 192 contributors to the express repo. Many only made one or two commits, but that's still quite a difference from the 4 listed npm maintainers.

You can find the relevant GitHub repository by running npm view <pkg> repository, and then subsequently run curl https://api.github.com/repos/<repo-user>/<repo-name>/contributors. For instance, for the marked package, you would first run npm view marked repository, and then curl https://api.github.com/repos/chjj/marked/contributors. Alternatively, you can easily see the maintainers, GitHub repository and its contributors via the npm and GitHub web UI.
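The two-step lookup above can be scripted. This sketch only derives the contributors API URL from the repository URL that `npm view <pkg> repository` returns; it assumes a GitHub-hosted package, as the vast majority are.

```javascript
// Sketch: turn an npm `repository` URL into the GitHub contributors API URL.
// Handles forms like git+https://github.com/chjj/marked.git
// and git://github.com/expressjs/express.git.
function contributorsUrl(repoUrl) {
  const m = repoUrl.match(/github\.com[/:]([^/]+)\/([^/.]+)/);
  if (!m) return null; // not hosted on GitHub
  return `https://api.github.com/repos/${m[1]}/${m[2]}/contributors`;
}
```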

#7: Does this package have known security vulnerabilities?

The questions above primarily reflect the risk of a future problem. However, your dependencies may be bringing in security flaws right now! Roughly 15% of packages carry a known vulnerability, either in their own code or in the dependencies they in turn bring in. According to Snyk's data, about 76% of Node shops use vulnerable dependencies in their applications.

You can easily find such vulnerable packages using Snyk. You can run snyk test in your terminal, or quickly test your GitHub repositories for vulnerable dependencies through the web UI. Snyk's test page holds other testing options.

"15% of #npm packages carry a known vulnerability, 76% of Node shops use vulnerable dependencies!" via @RisingStack

Click To Tweet

Snyk also makes it easy to fix the found issues, using snyk wizard in the terminal or an automated fix pull request. Fixes are done using guided upgrades or open source patches. Snyk creates these patches by back-porting the original fix, and they are stored as part of its Open Source vulnerability database.

Node.js Security: Test git repos with Snyk

Once you're free of vulnerabilities, you should ensure code changes don't make you vulnerable again. If you're using Snyk, you can test if pull requests introduce a vulnerable dependency, or add a test such as snyk test to your build process.
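One minimal way to wire this into an npm-based build is an npm `pretest` hook, which runs automatically before `npm test`. This is a sketch assuming Snyk is installed and Mocha is the test runner (both are assumptions, not requirements):

```json
{
  "scripts": {
    "pretest": "snyk test",
    "test": "mocha"
  }
}
```

With this in place, `npm test` fails fast whenever a vulnerable dependency is present, before any functional tests run.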

Lastly, when a new vulnerability is disclosed, you want to learn about it before attackers do. New vulnerabilities are disclosed independently of your code changes, so a CI test is not sufficient. To get an email (and a fix pull request) from Snyk whenever a new vulnerability affects you, click "Watch" on the "Test my Repositories" page, or run snyk monitor when you deploy new code.

Solving Node.js Security

npm packages are amazing, and let us build software at an unprecedented pace. You should definitely keep using npm packages - but there's no reason to do so blindly. We covered 7 questions you can easily answer to better understand and reduce your security exposure:

  1. Which packages am I using? And for each one...
  2. Am I still using this package?
  3. Are other developers using this package?
  4. Am I using the latest version of this package?
  5. When was this package last updated?
  6. How many maintainers do these packages have?
  7. Does this package have known security vulnerabilities?

Answer those, and you'll be both productive and secure!

If you have any thoughts or questions on the topic, please share them in the comments.

This article is a guest post from Guy Podjarny, CEO at Snyk, building dev tools to fix known vulnerabilities in open source components

Node.js Examples - How Enterprises use Node in 2016

Node.js has had an extraordinary year so far: npm already hit 4 million users and processes a billion downloads a week, while major enterprises keep adopting it as a core production technology day by day.

The latest example of Node.js ruling the world is the fact that NASA uses it “to build the present and future systems supporting spaceship operations and development.” - according to the recent tweets of Collin Estes - Director of Software Technologies of the Space Agency.

Node.js Examples: Nasa is using it to design Spacewalks

"So, Node.js is used for designing spacewalks - but what else?” via @RisingStack #nodejs #examples @nodejs

Click To Tweet

Fortunately, the Node Foundation’s “Enterprise conversations” project lets us peek into the life of the greatest enterprises and their use cases as well.

This article summarizes how GoDaddy, Netflix, and Capital One use Node.js in 2016.

GoDaddy ditched .NET to work with Node.js

Charlie Robbins is the Director of Engineering for the UX platform at GoDaddy. He is one of the longest-term users of the technology: he started using it shortly after watching Ryan Dahl’s legendary Node.js presentation at JSConf in December 2009, and was one of the founders of Nodejitsu.

His team at GoDaddy uses Node.js for both front-end and back-end projects, and they recently rolled out their global site rebrand in one hour thanks to the help of Node.js.

Before that, the company primarily used .NET and was transitioning to Java. They realized that even though Microsoft does a great job supporting .NET developers and has made .NET open source, it doesn’t have a vibrant community of module publishers, and they had to rely too much on what Microsoft released.

“The typical .NET scenario is that you wait for Microsoft to come out with something that you can use to do a certain task. You become really good at using that, but the search process for what’s good and what’s bad, it’s just not a skill that you develop.”

Because of this, the company had to develop a new skill: to go out and find all the other parts of the stack. As opposed to other enterprise technologies like .NET where most of the functionality was included in the standard library, they had to become experts in evaluating modules.

Node.js Examples: GoDaddy searching for new modules is a skill they need to learn

GoDaddy started to use Node for the front-end and then ended up using it more in the back-end as well. The same .NET engineers who were writing the back-end code were writing the JavaScript front-end code. The majority of engineers are full stack now.

The things Charlie finds most exciting about Node.js are mainly being handled by the working groups.

“I’m very excited about the tracing working group and the things that are going to come out of that to build an open source instrumentation system of eco-tooling.”

Other exciting things for him are the diagnostics working group (previously: inclusivity) and the Node.js Live events - particularly for Node.js communities in countries where English is not widely used. Places like China, for example, where most of the engineers still primarily speak Chinese, and there’s not a lot of crossover.

“I’m excited to see those barriers start to come down and as those events get to run.”

Speaking of GoDaddy and Node: they have just released the project they’ve been working on extensively with Cassandra. It was an eight-month-long process, and you can read the full story of “Taming Cassandra in Node.js” on the GoDaddy engineering blog.


Netflix scales horizontally thanks to its Node container layer

The next participants in the Node Foundation’s enterprise conversation series are Kim Trott, Director of UI Platform Engineering, and Yunong Xiao, Platform Architect at Netflix.

Kim has been at Netflix for nine years - she arrived just before the company launched its first streaming service. It was the era when you could only watch Netflix with Windows Media Player, and the full catalog consisted of only 50 titles.

“I've seen the evolution of Netflix going from DVD and streaming to now being our own content producer.“

Yunong Xiao, who’s well known for being the maintainer of restify, arrived two years ago and just missed the party the company held for reaching 15 million users - but since they are fast approaching their 100 millionth subscriber, he’ll have a chance to celebrate soon. Yunong previously worked on Node.js and distributed systems at Joyent, and at AWS as well. His role at Netflix is to keep Node up and running at scale and to make sure it’s performing well.

Kim manages the UI platform team within the UI engineering part of the organization. Their role is to help all the teams building the Netflix application by making them more productive and efficient. This job can cover a wide range of tasks: building libraries shared across all of the teams that make it easier to do data access or client-side logging, or building things that make it easier to run Node applications in production for UI-focused teams.

Kim gave us a brief update on how the containerization of the edge services has been going at Netflix, since she talked about it at Node Interactive last December.

Node.js Examples: Netflix is using Node for the containerization of their edge services

When any device or client tries to access Netflix, it has to go through something called edge services: a set of endpoint scripts - a monolithic, JVM-based system that lets them mutate and access data. It’s been working really well, but since it’s a monolith, Netflix ran into vertical scaling concerns. It was a great opportunity to leverage Node and Docker to scale all of these data access scripts out horizontally.

“Since I’ve spoken at Node Interactive we've made a lot of progress on the project, and we're actually about to run a full system test where we put real production traffic through the new Node container layer to prove out the whole stack and flush out any problems around scaling or memory, so that's really exciting.”

How has Node.js affected developer productivity at Netflix?

The developer productivity gains come from breaking down the monolith into smaller, much more manageable pieces - and from being able to run them on local machines thanks to the containerization.

We can effectively guarantee that what you're running locally will very closely mirror what you run in production, and that's really beneficial - said Kim.

“Because of the way Node works we can attach debuggers, and set breakpoint steps through the code. If you wanted to debug these groovy scripts in the past, you would make some code changes upload it to the edge layer, run it, see if it breaks, make some more changes, upload it again, and so on..”

It saves us tens of minutes per test, but the real testament to this project is that all of our engineers who are working on the clients are asking: when do we get to use this instead of the current stack? - said Yunong.

The future of Node at Netflix

Over the next few months, the engineering team will move past building out the previously mentioned stack and start working on tooling and performance-related problems. Finding better tools for post-mortem debugging is something they're absolutely passionate about.

They are also planning to be involved in the working groups and to contribute back to the community, so that they can build better tools that everyone can leverage.

“One of the reasons why Node is so popular is the fact that it's got a really solid suite of tools just to debug, so that's something that we’re actually working on contributing to.”

Node.js brings joy for developers at Capital One

Azat Mardan is a technology fellow at Capital One and an expert on Node.js and JavaScript. He’s also the author of Webapplog.com, and you’ve probably read one of his most popular books: Practical Node.js.

“Most people think of Capital One as a bank and not as a technology company, which it is. At Capital One, and especially this Technology Fellowship program, we bring innovation, so we have really interesting people on my team: Jim Jagielski and Mitch Pirtle. One founded Apache Software Foundation and the other, Joomla!, so I’m just honored to be on this team.”

Azat’s goal is to bring Node.js to Capital One: to teach Node.js courses internally, to write for the blog, and to provide architectural advice. The company has over 5,000 engineers and several teams who started using Node.js at different times.

Capital One uses Node.js for:

  • Hygieia, which is an open-source dashboard for DevOps. It started in 2013 and was announced last year at OSCON, and it has about 900 GitHub stars right now. They’re using Node.js for the frontend and for the build too.
  • Building the orchestration layer. They have three versions of the Enterprise API, mostly built with Java, but Java is not convenient to use on the front end.

Node.js Examples: Capital One use cases

Capital One mostly uses Angular, but they have a little bit of React as well. In this case, the front-facing single-page applications need something to massage and format the data - basically to make multiple calls to the different APIs. Node.js works really well for them for building this orchestration layer.

“It’s a brilliant technology for that piece of the stack because it allows us to use the same knowledge from the front end, to reuse some of the modules, to use the same developers. I think that’s the most widespread use case at Capital One, in terms of Node.js.”

The effect of Node.js on the company

Node.js allows much more transferable skill-sets between the front-end and some of the back-end teams, and it allows them to be a little bit more integrated.

“When I’m working with a team, whether it’s Java or C# developers, they’re dabbling a little bit on the front end; they’re not experts, but once they switch to a stack where Node.js is used in the back end, they’re more productive because they don’t have that switch of context. I see this pure joy that it brings to them during development, because JavaScript is just a fun language that they can use."

From the business perspective: the teams can reuse some of the modules and templates, for example, and some of the libraries as well. It’s great from both the developer and the managerial perspective.

Also, Node has a noticeable effect on the positions and responsibilities of the engineers as well.

Big companies like Capital One will definitely need pure back-end engineers for some of the projects in the future, but more and more teams employ ninjas who can do front-end, back-end, and a little bit of DevOps too - so the teams are becoming smaller.

Instead of two teams - one purely back-end and one purely front-end, consisting of seven people overall - a ninja team of five can do both.

“That removes a lot of overhead in communication because now you have fewer people, so you need fewer meetings, and you actually can focus more on the work, instead of just wasting your time.”

The future of Node.js

Node.js has the potential to be the go-to framework for both startups and big companies, which is a really unique phenomenon - according to Azat.

“I’m excited about this year, actually. I think this year is when Node.js has gone mainstream.”

The Node.js Interactive conference in December showed that major companies are supporting Node.js now. IBM said that Node.js and Java are the two languages they would be focusing on for their APIs, so the mainstream adoption of the language is coming, unlike what we’ve seen with Ruby - he said.

“I’m excited about Node.js in general, I see more demand for courses, for books, for different topics, and I think having this huge number of front-end JavaScript developers is just a tremendous advantage in Node.js.”

Start learning Node!

As you can see, adopting Node.js in an enterprise environment has tremendous benefits. It makes the developers happier and increases the productivity of the engineering teams.

If you’d like to start learning it, I suggest checking out our Node Hero tutorial series.

Share your thoughts in the comments.

Monitoring Microservices Architectures: Enterprise Best Practices

By reading the following article, you can gain insight into how lead engineers at IBM, Financial Times, and Netflix think about the pain points of application monitoring, and what their best practices are for maintaining and developing microservices. I’d also like to introduce a solution we developed at RisingStack, which aims to tackle the most important issues with monitoring microservices architectures.

Tearing down a monolithic application into a microservices architecture brings tremendous benefits to engineering teams and organizations. New features can be added without rewriting other services. Smaller codebases make development easier and faster, and the parts of an application can be scaled separately.

Unfortunately, migrating to a microservices architecture has its challenges as well, since it results in complex distributed systems where it can be difficult to understand the communication and request flow between the services. Also, monitoring gets increasingly frustrating thanks to a myriad of services generating a flood of unreliable alerts and un-actionable metrics.

Visibility is crucial for IBM with monitoring microservices architectures

Jason McGee, Vice President and Chief Technical Officer of Cloud Foundation Services at IBM, let us take a look at the microservice-related problems enterprises often face in his highly recommended DockerCon interview with The New Stack.


For a number of years - according to Jason - developer teams were struggling to deal with the increasing speed and delivery pressures they had to fulfill, but with the arrival of microservices, things have changed.

Migrating from the Monolith to a Microservices Architecture

In a microservices architecture, a complex problem can be broken up into units that are truly independent, so the parts can continue to work separately. The services are decoupled, so people can operate in small groups with less coordination and therefore they can respond more quickly and go faster.

“It’s interesting that a lot of people talk about microservices as a technology when in reality I think it’s more about people, and how people are working together.”

The important thing about microservices for Jason is that anyone can give 5 or 10 people responsibility for a function, and they can manage that function throughout its lifecycle and update it whenever they need to - without having to coordinate with the rest of the world.

“But in technology, everything has a tradeoff, a downside. If you look at microservices at an organization level, the negative trade-off is the great increase in the complexity of operations. You end up with a much more complex operating environment.”

Right now, a lot of the activity in the microservices space is about what kinds of tools and management systems teams have to put around their services to make microservices architectures a practical thing to do, said Jason. Teams with microservices have to understand how they want to factor their applications, what approaches they want to take for wiring everything together, and how they can gain visibility into their services.

The first fundamental problem developers have to solve is how the services are going to find each other. After that, they have to manage complexity by instituting some standardized approach for service discovery. The second biggest problem is about monitoring and bringing visibility to services. Developers have to understand what’s going on, by getting visibility into what is happening in their cloud-based network of services.

Describing this in a simplified manner: an app can have hundreds of services behind the scene, and if it doesn’t work, someone has to figure out what’s going on. When developers just see miles of logs, they are going to have a hard time tracing back a problem to its cause. That’s why people working with microservices need excellent tools providing actionable outputs.

“There is no way a human can map how everyone is talking to everyone, so you need new tools to give you the visibility that you need. That’s a new problem that has to be solved for microservices to become an option.”

At RisingStack, as an enterprise Node.js development and consulting company, we experienced the same problems with microservices since the moment of their conception.

Our frustration of not having proper tools to solve these issues led us to develop our own solution called Trace, a microservice monitoring tool with distributed transaction tracking, error detection, and process monitoring for microservices. Our tool is currently in an open beta stage, therefore it can be used for free.

If you’d like to give it a look, we’d appreciate your feedback on our Node.js monitoring platform.

Financial Times eases the pain of monitoring microservices architectures with the right tools and smart alerts

Sarah Wells, Principal Engineer of Financial Times told the story of what it’s like to move from monitoring a monolithic application to monitoring a microservice architecture in her Codemotion presentation named Alert overload: How to adopt a microservices architecture.

About two years ago, Financial Times started working on a new project with the goal of building a new content platform (Fast FT) with a microservices architecture and APIs. The project team also started to do DevOps at the same time, because they were building a lot of new services and couldn’t take the time to hand them over to a different operations team. According to Sarah, supporting their own services meant that all of the pain the operations team used to have was suddenly transferred to them if they did shoddy monitoring and alerting.

“Microservices make it worse! Microservices are an efficient device for transforming business problems into distributed transaction problems.”

It’s also important to note here, that there’s a lot of things to like about microservices as Sarah mentioned:

“I am very happy that I can reason about what I’m trying to do because I can make changes live to a very small piece of my system and roll back really easily whenever I want to. I can change the architecture and I can get rid of the old stuff much more easily than I could when I was building a monolith.”

Let’s see the biggest challenge the DevOps team at Financial Times faced with a microservice architecture. According to Sarah, monitoring suddenly became much harder because they had many more systems than before. The app they built consisted of 45 microservices, they had 3 environments (integration, test, production) and 2 VMs for each of those services, and they ran 20 different checks per service (for things like CPU load, disk status, functional tests, etc.) at least every 5 minutes. They ended up with 1,500,000 checks a day, which meant that they got alerts for unlikely and transient things all the time.
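For context, the quoted 1.5M figure follows directly from those numbers:

```javascript
// Back-of-the-envelope check of the Financial Times alerting numbers.
const services = 45;
const environments = 3; // integration, test, production
const vmsPerService = 2;
const checksPerService = 20;
const runsPerDay = (24 * 60) / 5; // one run every 5 minutes = 288 runs/day

const checksPerDay = services * environments * vmsPerService * checksPerService * runsPerDay;
console.log(checksPerDay); // 1555200 - roughly the 1.5M checks a day quoted above
```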

“When you build a microservices architecture and something fails, you’re going to get an alert from a service that’s using it. But if you’re not clever about how you do alerts, you’re also going to get alerts from every other service that uses it, and then you get a cascade of alerts.”

When a new developer joined Sarah’s team, he couldn’t believe the number of emails they got from different monitoring services, so he started to count them. The result: over 19,000 system monitoring alerts in 50 days, 380 a day on average. Functional monitoring was also an issue, since the team wanted to know when their response time was getting slow or when they logged or returned an error to anyone. Needless to say, they got swamped by the number of alerts: 12,745 response-time or error alerts in 50 days, 255 a day on average.

Monitoring a Microservices Architecture can cause trouble with Alerting

Sarah and the team finally developed three core principles for making this almost unbearable situation better.

1. Think about monitoring from the start.

The Financial Times team created far too many alerts without thinking about why they were doing it. As it turned out, it was the business functionality they really cared about, not the individual microservices - so that’s what their alerting should have focused on. At the end of the day, they only wanted an alert when they needed to take action. Otherwise, it was just noise. They made sure that the alerts were actually good, because anyone reading them should be able to work out what they mean and what needs to be done.

According to Sarah’s experience, a good alert has clear language, is not fake, and contains a link to more explanatory information. They also developed a smart solution: they tied all of their microservices together by passing transaction IDs around as request headers, so the team instantly knew if an error was caused by an event in the system, and they could even search for it. The team also established health checks for every RESTful application, since they wanted to know early about problems that could affect their customers.

2. Use the right tools for the job.

Since the platform Sarah’s team has been working on is an internal PaaS, they figured out that they needed some tooling to get the job done. They used different solutions for service monitoring, log aggregation, graphing, and real-time error analysis, and also built some custom in-house tools for themselves. You can check out the individual tools in Sarah’s presentation, from slide 51.

The main takeaway from their example was that they needed tools that could show if something happened 10 minutes ago but disappeared soon after - while everyone was in a meeting. They also figured out the proper communication channel for alerting: it was not email, but Slack! The team also established a clever reaction system to tag solved and work-in-progress issues in Slack.

3. Cultivate your alerts.

As soon as you stop paying attention to alerts, things will go wrong. When Sarah’s team gets an alert, they review it and act on it immediately. If the alert isn’t good, they either get rid of it or make it better. If it isn’t helpful, they make sure it won’t get sent again. It’s also important to make sure that alerts haven’t stopped working. To check this, the FT team often breaks things deliberately (they actually have a chaos monkey), just to make sure that alerts do fire.

How did the team benefit from these actions? They were able to turn off all emails from system monitoring and could carry on with work while still being able to monitor their systems. Sarah ended her presentation with a strong recommendation for using microservices, and with her previously discussed pieces of advice distilled into a brief form:

“I build microservices because they are good, and I really like working with them. If you do that, you have to appreciate that you need to work at supporting them. Think about monitoring from the start, make sure you have the right tools and continue to work on your alerts as you go.”

Death Star diagrams make no sense with Microservices Architectures

Adrian Cockcroft had the privilege of gaining a tremendous amount of microservices-related experience by working as Chief Architect for 7 years at Netflix - a company heavily relying on a microservices architecture to provide an excellent user experience.

According to Adrian, teams working with microservices have to deal with three major problems right now.

“When you have microservices, you end up with a high rate of change. You do a code push and floods of new microservices appear. It’s possible to launch thousands of them in a short time, which will certainly break any monitoring solution.”

The second problem is that everything is ephemeral: Short lifetimes make it hard to aggregate historical views of services, and hand tweaked monitoring tools take too much work to keep running.

“Microservices have increasingly complex calling patterns. These patterns are hard to figure out with 800 microservices calling each other all the time. The visualization of these flows gets overwhelming, and it’s hard to render so many nodes.”

These microservice diagrams may look complicated, but looking inside a monolith would be even more confusing, because it’s tangled together in ways you can’t even see - like a big mass of spaghetti, said Adrian.

A Microservices Architecture often looks like Death Star diagrams

Furthermore, managing scale is a grave challenge in the industry right now, because a single company can have tens of thousands of instances across five continents, and that makes things complicated. Tooling is crucial in this area. Netflix built its own in-house monitoring tool. Twitter made its own tool too, called Zipkin (an open-source Java monitoring tool based on Google’s Dapper technology). The problem with these tools is that when teams look at the systems they have successfully mapped out, they often end up with so-called Death Star diagrams.

“Currently, there are a bunch of tools trying to do monitoring in a small way - they can show the request flow across a few services. The problem is, that they can only visualize your own bounded context - who are your clients, who are your dependencies. That works pretty well, but once you’re getting into what’s the big picture with everything, the result will be too difficult to comprehend.”

For Adrian, it was a great frustration at Netflix that every monitoring tool they tried exploded on impact. Another problem is that using, or even testing monitoring tools at scale gets expensive very quickly. Adrian illustrated his claim with a frightening example: The single biggest budget component for Amazon is the monitoring system: it takes up 20% of the costs.

“Pretty much all of the tools you can buy now understand datacenters with a hundred nodes - that’s easy. Some of them can understand the cloud. Some of them can get to a few thousand nodes. There are a few alpha and beta monitoring solutions that claim they can get to the tens of thousands. With APMs you want to understand containers, because your containers might be coming and going in seconds - so event-driven monitoring is a big challenge for these systems.”

According to Adrian, there is still hope since the tools that are currently being built will get to the point where the large scale companies can use them as commercial products.

If you have additional thoughts on the topic, feel free to share them in the comments section.

How Enterprises Benefit From Microservices Architectures

Building a microservices architecture in an enterprise environment has tremendous benefits:

  • Microservices do not require teams to rewrite the whole application if they want to add new features.
  • Smaller codebases make maintenance easier and faster. This saves a lot of development effort and time, and therefore increases overall productivity.
  • The parts of an application can be scaled separately and are easier to deploy.

After reading this article you will gain valuable insights on the best practices, benefits, and pain-points of using microservices, based on the experiences of highly innovative enterprises like Walmart, Spotify and Amazon.

Walmart Successfully Revitalized its Failing Architecture with Microservices

What can an enterprise do when its aging architecture finally begins to negatively affect business?

This is the multi-million dollar question which the IT department of Walmart Canada had to address after failing to serve its users on Black Friday for two years in a row - according to Kevin Webber, who helped re-architect the retail giant's online business.

“It couldn’t handle 6 million pageviews per minute and made it impossible to keep any kind of positive user experience anymore.”

Before embracing microservices, Walmart had an architecture for the internet of 2005, designed around desktops, laptops and monoliths. The company decided to replatform its old legacy system in 2012, since it was unable to scale to 6 million pageviews per minute and was down for most of the day during peak events. They wanted to prepare for the world of 2020, with 4 billion people connected, 25+ million apps available, and 5,200 GB of data for each person on Earth.

Walmart replatformed to a microservices architecture with the intention of achieving close to 100% availability with reasonable costs.

Walmart prepares with a microservices architecture ready for the World by 2020

“It’s important to have a system elastic enough to scale out to handle peak without negatively impacting experience.”

Migrating to microservices caused a significant business uplift for the company:

  • conversions were up by 20% literally overnight
  • mobile orders were up by 98% instantly
  • no downtime on Black Friday or Boxing Day (the Black Friday of Canada) - and zero downtime since the replatforming

The operational savings were significant as well since the company moved off of its expensive hardware onto commodity hardware (cheap virtual x86 servers). They saved 40% of the computing power and experienced 20-50% cost savings overall.

“Building microservice architectures are really the key to staying in front of the demands of the market. It’s not just a sort of replatforming for the sake of technology. It’s about the overall market in general, about what users expect and what business expects to stay competitive.“

Node.js Monitoring and Debugging from the Experts of RisingStack

Build performant microservices applications using Trace
Learn more

Spotify Builds Flawless User Experience with Microservices

Kevin Goldsmith, VP of Engineering at Spotify knows from experience that an enterprise which intends to move fast and stay innovative in a highly competitive market requires an architecture that can scale.

Spotify serves 75 million active users per month, with an average session length of 23 minutes, while running incredibly complex business rules behind the scenes. They also have to watch out for their competitors, Apple and Google.

“If you’re worried about scaling to hundreds of millions of users, you build your system in a way that you scale components independently.”

Spotify is built on a microservice architecture with autonomous full-stack teams in charge in order to avoid synchronization hell within the organization.

“The problem is, if you want to build a new feature in this kind of (monolithic) world, then the client team have to ask the core team: please get us an API and let us do this. The core team asks the server team: please implement this on the server side so we can do whatever we need to do. And after that, the server team has to ask the infrastructure team for a new database. It is a lot of asking.”

Spotify has 90 teams, 600 developers, and 5 development offices on 2 continents building the same product, so they needed to minimize these dependencies as much as possible.

Spotify builds microservices architectures with full-stack DevOps teams

That’s why they build microservices with full-stack teams, each consisting of back-end developers, front-end developers, testers, a UI designer, and a product owner. These teams are autonomous, and their missions do not overlap with other teams’ missions.

“Developers deploy their services themselves and they are responsible for their own operations too. It’s great when teams have operational responsibility. If they write crummy code, and they are the ones who have to wake up every night to deal with incidents, the code will be fixed very soon.”

Spotify’s microservices are built in very loosely coupled architectures. There aren’t any strict dependencies between individual components.

Kevin mentioned the main challenges of working with microservices:

  • They are difficult to monitor since thousands of instances are running at the same time.
  • Microservices are prone to increased latency: instead of calling a single process, Spotify calls a lot of services, and these services call other services too, so latency grows through each of these calls.
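The way latency piles up through a chain of calls can be sketched with plain promises. The service names and millisecond delays below are hypothetical illustration values, not Spotify's actual services:

```javascript
// Sketch: latency accumulation across a chain of service calls.
// Service names and delays are made up for illustration.

// Simulate a downstream service call that responds after `ms` milliseconds.
const callService = (name, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(name), ms));

// Sequential chain: each hop's latency adds to the total (~20+30+40 ms).
async function handleSequential() {
  const start = Date.now();
  await callService('auth', 20);
  await callService('playlist', 30);
  await callService('recommendations', 40);
  return Date.now() - start;
}

// Independent calls issued concurrently: the total is bounded by the
// slowest hop (~40 ms) instead of the sum.
async function handleParallel() {
  const start = Date.now();
  await Promise.all([
    callService('auth', 20),
    callService('playlist', 30),
    callService('recommendations', 40),
  ]);
  return Date.now() - start;
}

handleSequential().then((ms) => console.log(`sequential chain: ~${ms} ms`));
```

Real systems can't always parallelize hops (later calls may depend on earlier results), which is why deep call chains tend to inflate latency.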

Spotify moved to a microservices architecture to move fast and stay innovative

However, building a microservice architecture has its clear benefits for enterprises according to him:

  • It’s easy to scale based on real-world bottlenecks: you can identify the bottlenecks in your services and replicate or fix them there without massive rewrites.
  • It’s way easier to test: the test surface is smaller, and services don’t do as much as big monolithic applications do, so developers can test services locally - without having to deploy them to a test environment.
  • It’s easier to deploy: applications are smaller, so they deploy really fast.
  • Easier monitoring (in some sense): services are doing less so it’s easier to monitor each of these instances.
  • Services can be versioned independently: there’s no need to add support for multiple versions in the same instances, so they don’t end up adding multiple versions to the same binary.
  • Microservices are less susceptible to large failures: big services fail big, small services fail small.

Building a microservices architecture allows Spotify to have a large number of services down at the same time without users even noticing. They’ve built their system assuming that services can fail all the time; since each individual service does relatively little, a failing one can’t ruin the experience of using Spotify.

Kevin Goldsmith ended his speech with a big shout-out to those who are hesitant about embracing microservices in an enterprise environment:

“We’ve been doing microservices at Spotify for years. We do it on a pretty large scale. We do it with thousands and thousands of running instances. We have been incredibly happy with it because we have scaled stuff up. We can rewrite our services at will - which we do, rather than continue to refactor them or to add more and more technical debt over time. We just rewrite them when we get to a scaling inflection point. We do this kind of stuff all the time because it’s really easy with this kind of architecture, and it’s working incredibly well for us. So if you are trying to convince somebody at your company, point to Spotify, point to Netflix, point to other companies and say: This is really working for them, they’re super happy with it.”

Amazon Embraced the DevOps Philosophy with Microservices and Two-Pizza Teams

Rob Brigham, senior AWS product manager, shared the story of how Amazon embraced the DevOps philosophy while migrating to a microservice infrastructure.

He began his speech with a little retrospection: in 2001, the Amazon.com retail website was a large architectural monolith. It was architected in multiple tiers, and those tiers had many components in them, but they were coupled together very tightly, and behaved like one big monolith.

“A lot of startups and enterprise projects start out this way. They take a monolith-first approach, because it’s very quick, but over time, as that project matures and has more developers on it, as it grows and the codebase gets larger, and the architecture gets more complex, that monolith is going to add overhead to your process, and the software development lifecycle is going to slow down.”

How did this affect Amazon? They had a large number of developers working on one big monolithic website, and even though each one of these developers only worked on a very small piece of that application, they still needed to deal with the overhead of coordinating their changes with everyone else who was also working on the same project.

Amazon embraced microservices architecture to shorten the development lifecycle

When they were adding a new feature or making a bugfix, they needed to make sure that the change wouldn’t break something else in the project. If they wanted to update a shared library to take advantage of a new feature, they needed to convince everyone else on the project to upgrade to the new shared library at the same time. If they wanted to make a quick fix - to push out to their customers quickly - they couldn’t just do it on their own schedule; they had to coordinate with all the other developers who had changes in progress at the same time.

“This led to the existence of something like a merge Friday or a merge week - where all the developers took their changes, merged them together into one version, resolved all the conflicts, and finally created a master version that was ready to move out into production.“

Even when they had that large new version, it still added a lot of overhead to the delivery pipeline. The whole new codebase needed to be rebuilt, all of the test cases needed to be rerun, and after that they had to take the whole application and deploy it to the full production fleet.

Fun fact: In the early 2000’s Amazon even had an engineering group whose sole job was to take these new versions of the application and manually push it across Amazon's production environment.

It was frustrating for the software engineers, and most importantly, it was slowing down the software development lifecycle, the ability to innovate, so they made architectural and organizational changes - big ones.

Amazon makes 50 million deployments a year thanks to DevOps and Microservices architecture

These big changes began on an architectural level: Amazon went through its monolithic application and teased it apart into a Service Oriented Architecture.

“We went through the code and pulled out functional units that served a single purpose and wrapped those with a web service interface. We then established a rule, that from now on, they can only talk to each other through their web service APIs.”

This enabled Amazon to create a highly decoupled architecture, where these services could iterate independently from each other without any coordination between those services as long as they adhered to that standard web service interface.

“Back then it didn’t have a name, but now we call it a microservice architecture.”

Amazon also implemented changes in how their organization operated. They broke down their one, central, hierarchical product development team into small, “two-pizza teams”.

“We originally wanted teams so small that we could feed them with just two pizzas. In reality, it’s 6-8 developers per team right now.”

Each of these teams was given full ownership of one or a few microservices. And by full ownership they mean everything at Amazon: talking to the customers (internal or external), defining their own feature roadmap, designing their features, implementing them, then testing, deploying and operating them.

If anything goes wrong anywhere in that full lifecycle, these two-pizza teams are the ones accountable for fixing it. If they choose to skimp on their testing and are unknowingly releasing bad changes into production, the same engineers have to wake up and fix the service in the middle of the night.

This organizational restructuring properly aligned incentives, so engineering teams are now fully motivated to make sure the entire end-to-end lifecycle operates efficiently.

“We didn’t have this term back then, but now we call it a DevOps organization. We took the responsibilities of development, test, and operations, and merged those all into a single engineering team.”

After all these changes were made, Amazon dramatically improved its front-end development lifecycle. Now the product teams can quickly make decisions and crank out new features for their microservices. The company makes 50 million deployments a year, thanks to the microservice architecture and their continuous delivery processes.

“How can others do this? There is not one right answer for every company. A company needs to look at cultural changes, organizational changes, and process changes. Also, there is one common building block that every DevOps transformation needs: That is to have an efficient and reliable continuous delivery pipeline.”

Every technology has a downside. If we consider microservices at the organizational level, the negative trade-off is clearly the increase in the complexity of operations. There is no way a human can ultimately map how all of the services talk to each other, so companies need tools to provide visibility into their microservice infrastructure.

At RisingStack, our enterprise microservice development and consulting experience inspired us to create a monitoring tool called Trace, which allows engineers to successfully tackle the most common challenges during the full lifecycle of microservices: transaction tracking, anomaly detection, service topology and performance monitoring.


Do you have additional insights on the topic? Share it in the comments.

How Enterprises Benefit from Node.js

"I’m making the bold claim: To every organization, Node.js is absolutely essential." - Scott Rahner, Engineering Productivity Lead of Dow Jones.

Using Node.js in an enterprise setting has many well-known advantages:

  • It makes development faster and increases the productivity of teams, thanks to npm, which has more than 230,000 modules that can be used instantly.
  • The high scalability of Node lets you spend less on infrastructure, since you can handle the same amount of load with less hardware.
  • A well-established Long Term Support plan ensures that each release is going to be maintained for 30 months.

But when we say Node.js is enterprise-ready, we aren’t just talking about advantages in theory. We have summarized what leading developers say about using Node.js in an enterprise environment, why they chose it, and how the technology improved their teams and products.

Download the Full Report: Node.js is Enterprise Ready

Dow Jones Uses Node.js from the Start

Developers at Dow Jones were already huge JavaScript enthusiasts back in 2010 and started playing around with Node as soon as they could - according to Scott Rahner’s NodeSummit keynote.

The developer team at Dow Jones used Node.js in production for the first time in 2011 with "Wall Street Journal Social", an experimental Facebook reader application. Node met all their expectations, since application performance was great and active development took only a few weeks.

Dow Jones Node.js Enterprise Adoption Timeline

The success of Wall Street Journal Social with Node got the whole engineering team excited at Dow Jones, but it was still more of an experimental project. The first premium Node project, "Wall Street Journal Real Time" - a news feed app - came a year later. They experienced the same success, again.

The newly appointed CTO at Dow Jones was very enthusiastic about Node. He had first-hand experience of how it benefited the company, so he pushed management to support it, and soon announced that Node would be the primary technology at Dow Jones.

Thanks to the standardization of development processes, great management decisions and internal Node.js evangelists, they were able to scale Node.js usage across a large organization. They even retrained over 100 .NET developers in Node.js in a short amount of time.

Today, most of the products - especially on the consumer side - are 100% Node.js-based at Dow Jones.

"When you think about JavaScript, there’s never been a technology like this. Something you can deploy to every single platform, doesn’t matter if it’s Linux, Windows, Heroku, AWS, DigitalOcean, etc. It’s more universally known by engineers than any other language, hands down. Obviously meets the performance profile of all of today's applications. It fits perfectly." - Scott Rahner

Uber Runs on Node.js

Tom Croucher let us peek under the hood of Uber at his latest NodeConf talk in December 2015.

Enterprise Node.js Adoption at Uber Then and Now

"The thing that I like best about Node is the amount of power that I've personally found it gives me. The ease with which I can do things with Node has amplified the power that I have as a developer."

Most of Uber - for the first 5 billion dollars of valuation - was built using Node 0.8. Then they moved to Node 0.10 in six months. Node 0.10 is super stable everywhere and works well according to Tom, but they clearly see the benefit of jumping to a newer version.

Uber owes a lot to Node.js:

"The heart of the 15 billion dollar business is written as server-side Node, as APIs, as reliable distributed systems with queuing and replication and geospatial databases written in Node."


High Speed, High Volume for GoDaddy

Stephen Comissio, a senior (ex-.NET) developer, told the story of how GoDaddy migrated to Node.js and how it benefited the company.

A few years ago, GoDaddy mainly employed Java and .NET developers, and developers with 10+ years of monolithic stack experience. They saw that this wasn’t the future for an agile company, so they decided to start an enterprise-level culture shift and began prototyping Node.js applications in 2013.

GoDaddy's front end already relied on JavaScript and single-page applications by that time, but the backend ran on a .NET stack. To increase its hosting capabilities, the company revamped its entire backend into a Node.js-based infrastructure.

But why have they chosen Node?

"Node allows you to easily build applications with high confidence in build quality. Unit testing is easier. Integration testing is easier. REST is easier. Deployments are easier."

During the "Puppet Master" Super Bowl ad in 2014, they faced one of their biggest scaling challenges so far. Their spot - broadcast to more than 100 million people - urged its viewers to visit a website made by one of their customers with the Website Builder app.

GoDaddy handles SuperBowl traffic with Node.js

At the time, GoDaddy’s infrastructure handled 13,000 requests per second with ~87 ms TTFB (Time To First Byte) on an average day, but now they had to think bigger. They estimated that the advertised website alone would have to handle 10,000 requests per second. To support this amount of traffic, the site had to be manually migrated to its own cluster consisting of 12 servers, but they succeeded.

"We can handle the same load with only 10% of the hardware now."

According to Stephen, GoDaddy uses Node.js because they can handle the same load with only 10% of the hardware they used before. Fewer servers need to be managed, and they are not forced to build out new servers at the previous rate. They serve 1.7 million requests per month and survive DDoS attacks with basically zero impact using Node.js day to day.

Adopting Node.js has its advantages from a talent acquisition point of view as well.

"It's hard to find top talent in the next generation of developers who want to work with statically typed languages like C# or Java. If you look at the momentum behind Node, you will see the growth of the platform, the increasing number of downloads, the high number of enterprise adoptions and the largest growth for startups."

PayPal has Increased Productivity with Node.js

"Node.js and an all-JavaScript development stack helped PayPal bring efficiencies in engineering and helped rethink and reboot product, design and operational thinking." - Sameera Rao, Sr. Business Products Engineering Manager

Sameera worked at a startup which was familiar with microservices and Node.js before he joined PayPal in 2012, an experience he described as going back in time. PayPal's architecture was monolithic, so one app had everything: UI, controllers, and calls to the API for all operations.

Enterprise Node.js at PayPal: before and after

There was a lot of duplication: teams copy-pasted code, made the tweaks needed for a particular country, then rolled out another application. From an engineering point of view, it was like an assembly line.

If they wanted to customize something for a localized version of PayPal - given that there was no foundation to build on - everything went to the core team’s backlog as something to work on.

"It was a prioritisation exercise where not much got done. We wanted to build a foundation which teams can work on, and it all came together and just works."

What PayPal did:

  • They moved from the old architecture and mindset to a new, service-oriented one.
  • They scaled Node.js and Kraken.js for a global organization - with multiple teams working on the same project.
  • They incorporated an open-source model, where anyone can submit a pull request to the core GitHub repository as long as the guidelines are met.

"What Node gave us is the ability to modularize every piece of the stack. Global teams were able to roll out experiences much more quickly."

Netflix & Node.js

Kim Trott, director of UI platform engineering, told the story of Node.js at Netflix at the latest NodeSummit in Portland.

The tale started in 2013, when they weren’t using Node.js in production at all. They were running a large, monolithic legacy application with 40-minute startup times, slow builds and huge developer machines.

It affected their ability to be productive, move quickly and innovate rapidly. They weren’t able to build out A/B testing effectively enough - which is crucial, since Netflix constantly runs hundreds of A/B tests simultaneously.

They were running Java on the server and JavaScript on the client. Their developers had to be great at a lot of things at the same time: caring about an amazing product experience while dealing with a lot of back-end and middle-tier aspects.

"We’ve been doing a lot of things twice. Pretty much had to write everything twice - once for the server and once for the client."

They had two ways of debugging, two ways of data access and two ways of rendering, so it was difficult to work in that environment. They hired and trained a lot of people to be great at all of that - but it wasn't working. They didn't have the developer productivity they wanted, and they weren’t moving at the pace of innovation needed to keep up with the business.

So they decided to simplify their stack, since their complex webapp layer did way too much: it held a lot of business logic, did a lot of data access and talked directly to hundreds of middle-tier services. They simply wanted to turn it into a single-responsibility rendering layer, where they only have to worry about routing, view templates and sending data to those templates. They also wanted to move the website to a single-page application instead of fully rendering each page as they had done before.

Netflix chose Node.js because they wanted a common language to write the same code: write it once, run it everywhere.

They didn’t want developers to do the constant context switching all the time - between Java and JavaScript, client and server side. They wanted the universal JavaScript aspect that they could get by running the same language on the server and the client.

Node.js Enterprise Adoption at Netflix

Now Netflix runs more as a single-page application with a rich user experience. They had to unlearn their Java instincts and really understand the characteristics of Node and how it's different.

"Instead of worrying about tuning the VM, we focused more on tuning the application and looking for where you're spending too much time on CPU, and finding CPU bottlenecks. Big challenge was memory leaks in production and learning how to root cause and find where those leaks are coming from."

Now Node.js is used on the entire website, but the rest of their clients (mobile, tv apps) are not necessarily using Node. More than 30% of the Netflix team is working on Node in production.

"You can go from 0 to 60 with Node really fast, so you can get something going really quickly."

Node.js fits the enterprise world and can be adopted successfully with great benefits, but it has its challenges as well. To reach the full potential of developing with Node.js, there are crucial points that must be addressed at the organizational level.

If you’d like to know more about adopting Node.js and overcoming common challenges, read our detailed report on the topic.

Download the Full Report: Node.js is Enterprise Ready

Do you have additional insights on the topic? Share it in the comments.