
Announcing Free Node.js Monitoring & Debugging with Trace

Today, we’re excited to announce that Trace, our Node.js monitoring & debugging tool, is now free for open-source projects.

What is Trace?

We launched Trace a year ago with the intention of helping developers looking for a Node.js-specific APM that is easy to use and helps with the most difficult aspects of building Node projects, like:

  • finding memory leaks in a production environment
  • profiling CPU usage to find bottlenecks
  • tracing distributed call-chains
  • avoiding security leaks & bad npm packages

and so on.

Node.js Monitoring with Trace by RisingStack - Performance Metrics chart

Why are we giving it away for free?

We use a ton of open-source technology every day, and we are also the maintainers of some.

We know from experience that developing an open-source project is hard work, which requires a lot of knowledge and persistence.

Trace will save a lot of time for those who use Node for their open-source projects.

How to get started with Trace?

  1. Visit trace.risingstack.com and sign up - it's free.
  2. Connect your app with Trace.
  3. Head over to this form and tell us a little bit about your project.

That's it - your open-source project will be monitored for free as a result.

If you need help with Node.js Monitoring & Debugging...

Just drop us a tweet at @RisingStack if you have any additional questions about the tool or the process.

If you'd like to read a little bit more about the topic, I recommend reading our previous article, The Definitive Guide for Monitoring Node.js Applications.

One more thing

At the same time as making Trace available for open-source projects, we're announcing a new line of business at RisingStack:

Commercial Node.js support, aimed at enterprises with Node.js applications running in a production environment.

RisingStack now helps to bootstrap and operate Node.js apps - no matter which stage of their life cycle they are in.


Disclaimer: We retain the exclusive right to accept or deny your application to use Trace by RisingStack for free.

Digital Transformation with the Node.js Stack

In this article, we explore the 9 main areas of digital transformation and show the benefits of implementing Node.js in each. At the end, we’ll lay out a Digital Transformation Roadmap to help you get started with this process.

Note that implementing Node.js is not the goal of a digital transformation project - it is just a great tool that opens up possibilities any organization can take advantage of.


Digital transformation is achieved by using modern technology to radically improve the performance of business processes, applications, or even whole enterprises.

One of the available technologies that enable companies to go through a major performance shift is Node.js and its ecosystem. It is a tool that grants improvement opportunities that organizations should take advantage of:

  • Increased developer productivity,
  • DevOps or NoOps practices,
  • and shipping software to production in a short time using the proxy approach,

just to mention a few.

The 9 Areas of Digital Transformation

Digital Transformation projects can improve a company in nine main areas. The following elements were identified in MIT research on digital transformation, in which 157 executives from 50 companies (typically $1 billion or more in annual sales) were interviewed.

#1. Understanding your Customers Better

Companies are heavily investing in systems to understand specific market segments and geographies better. They have to figure out what leads to customer happiness and customer dissatisfaction.

Many enterprises are building analytics capabilities to better understand their customers. Information derived this way can be used for data-driven decisions.

#2. Achieving Top-Line Growth

Digital transformation can also be used to enhance in-person sales conversations. Instead of paper-based presentations or slides, salespeople can use great-looking, interactive, tablet-based presentations.

Understanding customers better helps enterprises to transform and improve the sales experience with more personalized sales and customer service.

#3. Building Better Customer Touch Points

Customer service can be improved tremendously with new digital services - for example, by introducing new channels for communication. Instead of going to a local branch of a business, customers can talk to support through Twitter or Facebook.

Self-service digital tools can be developed that save time for the customer while saving money for the company.

#4. Process Digitization

With automation, companies can focus their employees on more strategic tasks, innovation, or creativity rather than repetitive efforts.

#5. Worker Enablement

Virtualization of individual work (separating the work process from the location of work) has become an enabler for knowledge sharing. Information and expertise are accessible in real time for frontline employees.

#6. Data-Driven Performance Management

With the proper analytical capabilities, decisions can be made on real data and not on assumptions.

Digital transformation is changing the way strategic decisions are made. With new tools, strategic planning sessions can include more stakeholders, not just a small group.

#7. Digitally Extended Businesses

Many companies extend their physical offerings with digital ones. Examples include:

  • news outlets augmenting their print offering with digital content,
  • FMCG companies extending to e-commerce.

#8. New Digital Businesses

With digital transformation, companies not only extend their current offerings, but also come up with new digital products that complement the traditional ones. Examples include connected devices, like GPS trackers that can report activity to the cloud and provide value to customers through recommendations.

#9. Digital Globalization

Global shared services, like shared finance or HR, enable organizations to build truly global operations.


The Digital Transformation Benefits of Implementing Node.js

Organizations are looking for the most effective way to achieve digital transformation - among these companies, Node.js is becoming the de facto technology for building out digital capabilities.

Node.js is a JavaScript runtime built on Chrome's V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient.

In other words: Node.js gives you the ability to write high-performance servers using JavaScript.

Increased Developer Productivity with Same-Language Stacks

When PayPal started using Node.js, they reported a 2x increase in productivity compared to their previous Java stack. How is that even possible?

npm, the Node.js package manager, has an incredible number of modules that can be used instantly. This saves a lot of development effort for the team.

Secondly, as Node.js applications are written using JavaScript, front-end developers can also easily understand what's going on and make necessary changes.

This saves you valuable time again as developers will use the same language on the entire stack.

100% Business Availability, even with Extreme Load

Around 1.5 billion dollars are spent online in the US on a single day - Black Friday - each year.

It is crucial that your site can keep up with the traffic. This is why Walmart, one of the biggest retailers, is using Node.js to serve 500 million pageviews on Black Friday without a hitch.

Fast Apps = Satisfied Customers

As your velocity increases because of the productivity gains, you can ship features/products sooner. Products that run faster result in better user experience.

A Kissmetrics study showed that 40% of people abandon a website that takes more than 3 seconds to load, and 47% of consumers expect a web page to load in 2 seconds or less.

To read more on the benefits of using Node.js, you can download our Node.js is Enterprise Ready ebook.


Your Digital Transformation Roadmap with Node.js

As with most new technologies introduced to a company, it’s worth taking baby steps with Node.js as well. As a short framework for introducing Node.js, we recommend the following steps:

  • building a Node.js core team,
  • picking a small part of the application to be rewritten/extended using Node.js,
  • extending the scope of the project to the whole organization.

Step 1 - Building your Node.js Core Team

The core Node.js team should consist of people with JavaScript experience on both the backend and the frontend. It’s not crucial that the backend engineers have prior Node.js experience; the important aspect is the vision they bring to the team.

Introducing Node.js is not just about JavaScript - members of the operations team have to join the core team as well.

The introduction of Node.js to an organization does not stop at mastering Node.js itself - it also means adopting modern DevOps or NoOps practices, including but not limited to continuous integration and delivery.

Step 2 - Embracing The Proxy Approach

To incrementally replace old systems or to extend their functionality easily, your team can use the proxy approach.

For the features or functions you want to replace, create a small and simple Node.js application and proxy some of your load to it. This proxy does not necessarily have to be written in Node.js. With this approach, you can easily benefit from a modularized, service-oriented architecture.

Another way to use proxies is to write them in Node.js and make them talk to the legacy systems. This way you have the option to optimize the data being sent. PayPal was one of the first adopters of Node.js at scale, and they started with this proxy approach as well.

The biggest advantage of these solutions is that you can put Node.js into production in a short amount of time, measure your results, and learn from them.

Step 3 - Measure Node.js, Be Data-Driven

For the successful introduction of Node.js during a digital transformation project, it is crucial to set up a series of benchmarks to compare the results between the legacy system and the new Node.js applications. These data points can include response time, throughput, memory usage, and CPU usage.
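To make such comparisons concrete, the collected samples can be reduced to a few summary statistics. The helper below is a hypothetical sketch, not part of any particular tool:

```javascript
// Hypothetical helper for summarizing benchmark samples (e.g. response
// times in milliseconds) collected from the legacy system and the new
// Node.js service, so the two can be compared side by side.
function summarize (samples) {
  const sorted = samples.slice().sort((a, b) => a - b)
  const sum = sorted.reduce((acc, value) => acc + value, 0)
  // nearest-rank percentile over the sorted samples
  const percentile = (p) =>
    sorted[Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1)]
  return {
    mean: sum / sorted.length,
    median: percentile(50),
    p95: percentile(95),
    max: sorted[sorted.length - 1]
  }
}

// Example: latency samples gathered from a load test
console.log(summarize([120, 95, 103, 88, 450, 101]))
```

Comparing medians and high percentiles (rather than averages alone) makes outliers like the 450 ms sample above visible.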

Orchestrating The Node.js Stack

As mentioned previously, introducing Node.js does not stop at mastering Node.js itself - introducing continuous integration and delivery is crucial as well.

Also, from an operations point of view, it is important to adopt containers so you can ship applications with confidence.

For orchestration - to operate the containers running the Node.js applications - we encourage companies to adopt Kubernetes, an open-source system for automating the deployment, scaling, and management of containerized applications.

RisingStack and Digital Transformation with Node.js

RisingStack enables amazing companies to succeed with Node.js and related technologies to stay ahead of the competition. We have provided professional Node.js development and consulting services since the early days of Node.js, helping companies like Lufthansa or Cisco to thrive with this technology.

Node Hero - Node.js Authentication using Passport.js

This is the 8th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In this tutorial, you are going to learn how to implement a local Node.js authentication strategy using Passport.js and Redis.

Technologies to use

Before jumping into the actual coding, let's take a look at the new technologies we are going to use in this chapter.

What is Passport.js?

Simple, unobtrusive authentication for Node.js - http://passportjs.org/

Passport is authentication middleware for Node.js, which we are going to use for session management.

What is Redis?

Redis is an open source (BSD licensed), in-memory data structure store, used as database, cache and message broker. - https://redis.io/

We are going to store our users' session information in Redis, not in the process's memory. This way our application will be much easier to scale.

The Demo Application

For demonstration purposes, let’s build an application that does only the following:

  • exposes a login form,
  • exposes two protected pages:
    • a profile page,
    • secured notes

The Project Structure

You have already learned how to structure Node.js projects in the previous chapter of Node Hero, so let's use that knowledge!

We are going to use the following structure:

├── app
|   ├── authentication
|   ├── note
|   ├── user
|   ├── index.js
|   └── layout.hbs
├── config
|   └── index.js
├── index.js
└── package.json

As you can see, we organize files and directories around features. We will have a user page, a note page, and some authentication-related functionality.

(Download the full source code at https://github.com/RisingStack/nodehero-authentication)

The Node.js Authentication Flow

Our goal is to implement the following authentication flow into our application:

  1. User enters username and password
  2. The application checks if they match
  3. If they match, it sends a Set-Cookie header that will be used to authenticate subsequent requests
  4. When the user visits pages on the same domain, the previously set cookie will be added to all requests
  5. Restricted pages are authenticated using this cookie

To set up an authentication strategy like this, follow these three steps:

Step 1: Setting up Express

We are going to use Express for the server framework - you can learn more on the topic by reading our Express tutorial.

// file:app/index.js
const express = require('express')
const passport = require('passport')
const session = require('express-session')
const RedisStore = require('connect-redis')(session)
const config = require('../config') // loads the Redis connection settings

const app = express()  
app.use(session({  
  store: new RedisStore({
    url: config.redisStore.url
  }),
  secret: config.redisStore.secret,
  resave: false,
  saveUninitialized: false
}))
app.use(passport.initialize())  
app.use(passport.session())  

What did we do here?

First, we required all the dependencies that session management needs. After that, we created a new instance of the express-session module, which will store our sessions.

For the backing store, we are using Redis here, but you can use any other store, like MySQL or MongoDB.

Step 2: Setting up Passport for Node.js

Passport is a great example of a library using plugins. For this tutorial, we are adding the passport-local module, which enables easy integration of a simple local authentication strategy using usernames and passwords.

For the sake of simplicity, in this example we are not using a backing store, only an in-memory user instance. In real-life applications, findUser would look up a user in a database.

// file:app/authentication/init.js
const passport = require('passport')
const LocalStrategy = require('passport-local').Strategy

const user = {
  username: 'test-user',
  password: 'test-password',
  id: 1
}

// in a real application, this would query a database
function findUser (username, callback) {
  if (username === user.username) {
    return callback(null, user)
  }
  return callback(null)
}

passport.use(new LocalStrategy(
  function (username, password, done) {
    findUser(username, function (err, user) {
      if (err) {
        return done(err)
      }
      if (!user) {
        return done(null, false)
      }
      if (password !== user.password) {
        return done(null, false)
      }
      return done(null, user)
    })
  }
))

Once findUser returns with our user object, the only thing left is to compare the user-supplied password with the real one to see if there is a match.

If it is a match, we let the user in (by returning the user to passport - return done(null, user)); if not, we return an unauthorized error (by returning false to passport - return done(null, false)).

Step 3: Adding Protected Endpoints

To add protected endpoints, we are leveraging the middleware pattern Express uses. For that, let's create the authentication middleware first:

// file:app/authentication/middleware.js
function authenticationMiddleware () {  
  return function (req, res, next) {
    if (req.isAuthenticated()) {
      return next()
    }
    res.redirect('/')
  }
}

It has only one role: if the user is authenticated (has the right cookie), it simply calls the next middleware; otherwise, it redirects to the page where the user can log in.

Using it is as easy as adding a new middleware to the route definition.

// file:app/user/init.js
const passport = require('passport')

// authenticationMiddleware is attached to the passport object
// when the authentication module is initialized
app.get('/profile', passport.authenticationMiddleware(), renderProfile)

Summary

"Setting up authentication for Node.js with Passport is a piece of cake!" via @RisingStack #nodejs

In this Node.js tutorial, you have learned how to add basic authentication to your application. Later on, you can extend it with different authentication strategies, like Facebook or Twitter. You can find more strategies at http://passportjs.org/.

The full, working example is on GitHub, you can take a look here: https://github.com/RisingStack/nodehero-authentication

Download the whole Node Hero series as a single pdf

Next up

The next chapter of Node Hero will be all about unit testing Node.js applications. You will learn concepts like unit testing, test pyramid, test doubles and a lot more!

Share your questions and feedback in the comment section.


Introducing Distributed Tracing for Microservices Monitoring

At RisingStack, an enterprise Node.js development and consulting company, we have been working tirelessly over the past two years to build durable and efficient microservices architectures for our clients, as passionate advocates of this technology.

During this period, we had to face the cold fact that there aren’t proper tools to support microservices architectures and the developers working with them. Monitoring, debugging, and maintaining distributed systems is still extremely challenging.

We want to change this because doing microservices shouldn’t be so hard.

I am proud to announce that Trace, our microservices monitoring tool, has entered the Open Beta stage and is now available to use for free with Node.js services.

Trace provides:

  • A Distributed Trace view for all of your transactions with error details
  • Service Map to see the communication between your microservices
  • Metrics on CPU, memory, RPM, response time, event loop and garbage collection
  • Alerting with Slack, PagerDuty, and Webhook integration

Trace makes application-level transparency available for large microservices systems with very low overhead. It also helps you localize production issues faster, and debug and monitor applications with ease.

You can use Trace in any IaaS or PaaS environment, including Amazon AWS, Heroku or DigitalOcean. Our solution currently supports Node.js only, but it will be available for other languages later as well. The open beta program lasts until 1 July.

Get started with Trace for free

Read on to get details on the individual features and on how Trace works.

Distributed Tracing

The most important feature of Trace is the transaction view. By using this tool, you can visualize every transaction going through your infrastructure on a timeline - in a very detailed way.

Distributed Tracing View Trace by Risingstack

By attaching a correlation ID to requests, Trace groups the services taking part in a transaction and visualizes the exact data flow on a simple tree graph. Thanks to this, you can see the distributed call stacks and the dependencies between your microservices, and see where a request takes the most time.

This approach also lets you localize ongoing issues and shows them on the graph. Trace provides detailed feedback on what caused an error in a transaction and gives you enough data to start debugging your system instantly.

Distributed Tracing with Detailed Error Message

When a service causes an error in a distributed system, usually all of the services taking part in that transaction will throw an error, and it is hard to figure out which one really caused the trouble in the first place. From now on, you won’t need to dig through log files to find the answer.

With Trace, you can instantly see the path of a given request, which services were involved, and what caused the error in your system.

The technology Trace uses is primarily based on Google’s Dapper whitepaper. Read the whole paper for the exact details.

Microservices Topology

Trace automatically generates a dynamic service map based on how your services communicate with each other or with databases and external APIs. In this view, we provide feedback on infrastructure health as well, so you will get informed when something begins to slow down or when a service starts to handle an increased amount of requests.

Distributed Tracing with Service Topology Map

The service topology view also lets you immediately get a sense of how many requests your microservices handle in a given period and how long their response times are.

With this information, you can see what your application looks like and understand the behavior of your microservices architecture.

Metrics and Alerting

Trace provides critical metrics data for each of your monitored services. Other than basics like CPU usage, memory usage, throughput and response time, our tool reports event loop and garbage collection metrics as well to make microservices development and operations easier.

Distributed Tracing with Metrics and Alerting

You can create alerts and get notified when a metric passes warning or error thresholds, so you can act immediately. Trace will alert you via Slack, PagerDuty, Email, or Webhook.

Give Microservices Monitoring a Try

Adding Trace to your services takes just a couple of lines of code, and it can be installed and used in under two minutes.

Click to sign up for Trace

We are curious about your feedback on Trace and on the concept of distributed transaction tracking, so don’t hesitate to express your opinion in the comment section.

Node Hero - Node.js Request Module Tutorial

This is the 6th part of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In the following tutorial, you will learn the basics of HTTP, and how you can fetch resources from external sources using the Node.js request module.

What's HTTP?

HTTP stands for Hypertext Transfer Protocol. HTTP functions as a request–response protocol in the client–server computing model.

HTTP Status Codes

Before diving into the communication with other APIs, let's review the HTTP status codes we may encounter during the process. They describe the outcome of our requests and are essential for error handling.

  • 1xx - Informational: These codes indicate that the request was received and the process is continuing.

  • 2xx - Success: These status codes indicate that our request was received and processed correctly. The most common success codes are 200 OK, 201 Created and 204 No Content.

  • 3xx - Redirection: This group shows that the client has to take additional action to complete the request. The most common redirection codes are 301 Moved Permanently and 304 Not Modified.

  • 4xx - Client Error: This class of status codes is used when the request sent by the client was faulty in some way. The server response usually contains an explanation of the error. The most common client error codes are 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, and 409 Conflict.

  • 5xx - Server Error: These codes are sent when the server failed to fulfill a valid request due to some error. The cause may be a bug in the code or some temporary or permanent incapability. The most common server error codes are 500 Internal Server Error and 503 Service Unavailable.

If you'd like to learn more about HTTP status codes, you can find a detailed explanation of them here.

Sending Requests to External APIs

Connecting to external APIs is easy in Node. You can just require the core HTTP module and start sending requests.

Of course, there are much better ways to call an external endpoint. On npm you can find multiple modules that make this process easier for you. For example, the two most popular ones are the request and superagent modules.

Both of these modules have an error-first callback interface that can lead to some issues (I bet you've heard about Callback Hell), but luckily we also have access to promise-wrapped versions.

Using the Node.js Request Module

Using the request-promise module is simple. After installing it from npm, you just have to require it:

const request = require('request-promise')  

Sending a GET request is as simple as:

const options = {  
  method: 'GET',
  uri: 'https://risingstack.com'
}
request(options)  
  .then(function (response) {
    // Request was successful, use the response object at will
  })
  .catch(function (err) {
    // Something bad happened, handle the error
  })

If you are calling a JSON API, you may want request-promise to parse the response automatically. In this case, just add this to the request options:

json: true  

POST requests work in a similar way:

const options = {  
  method: 'POST',
  uri: 'https://risingstack.com/login',
  body: {
    foo: 'bar'
  },
  json: true // stringifies the body to JSON automatically
}
request(options)  
  .then(function (response) {
    // Handle the response
  })
  .catch(function (err) {
    // Deal with the error
  })

To add query string parameters, you just have to add the qs property to the options object:

const options = {  
  method: 'GET',
  uri: 'https://risingstack.com',
  qs: {
    limit: 10,
    skip: 20,
    sort: 'asc'
  }
}

This will make your request URL: https://risingstack.com?limit=10&skip=20&sort=asc.
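If you want to double-check how such a URL is built, Node's built-in URLSearchParams produces the same query string:

```javascript
// Building the same query string with Node's core URLSearchParams
// (available as a global in modern Node.js versions)
const qs = new URLSearchParams({ limit: 10, skip: 20, sort: 'asc' })
console.log(`https://risingstack.com?${qs.toString()}`)
```

This is handy for verifying what a request library will actually send over the wire.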

You can also define any header the same way we added the query parameters:

const options = {  
  method: 'GET',
  uri: 'https://risingstack.com',
  headers: {
    'User-Agent': 'Request-Promise',
    'Authorization': 'Basic QWxhZGRpbjpPcGVuU2VzYW1l'
  }
}

Error handling

Error handling is an essential part of making requests to external APIs, as we can never be sure what will happen to them. Apart from our own client errors, the server may respond with an error or send data in a wrong or inconsistent format. Keep these in mind when handling the response. Also, using catch for every request is a good way to avoid the external service crashing our server.
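These failure modes can be made explicit in a small validation helper. The handleApiResponse function below is a hypothetical example of the principle, not part of request-promise:

```javascript
// Hypothetical helper illustrating the two failure modes discussed above:
// an error status code from the server, and a malformed response body.
function handleApiResponse (statusCode, rawBody) {
  // a non-2xx status means the request failed on the server side
  if (statusCode < 200 || statusCode >= 300) {
    throw new Error(`Request failed with status code ${statusCode}`)
  }
  // the server may also send data in a wrong or inconsistent format
  try {
    return JSON.parse(rawBody)
  } catch (err) {
    throw new Error('Response is not valid JSON')
  }
}
```

Wrapping every external call in logic like this (or a `.catch` handler) keeps a misbehaving external service from crashing your server.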

Putting it together

As you have already learned how to spin up a Node.js HTTP server, how to render HTML pages, and how to get data from external APIs, it is time to put them together!

In this example, we are going to create a small Express application that can render the current weather conditions based on city names.

(To get your AccuWeather API key, please visit their developer site.)

const express = require('express')
const path = require('path')
const rp = require('request-promise')
const exphbs = require('express-handlebars')

const app = express()

app.engine('.hbs', exphbs({  
  defaultLayout: 'main',
  extname: '.hbs',
  layoutsDir: path.join(__dirname, 'views/layouts')
}))
app.set('view engine', '.hbs')  
app.set('views', path.join(__dirname, 'views'))

app.get('/:city', (req, res) => {  
  rp({
    uri: 'http://apidev.accuweather.com/locations/v1/search',
    qs: {
      q: req.params.city,
      apiKey: 'api-key' // use your AccuWeather API key here
    },
    json: true
  })
    .then((data) => {
      res.render('index', data)
    })
    .catch((err) => {
      console.log(err)
      res.render('error')
    })
})

app.listen(3000)  

The example above does the following:

  • creates an Express server
  • sets up the handlebars structure - for the .hbs file please refer to the Node.js HTTP tutorial
  • sends a request to the external API
    • if everything is ok, it renders the page
    • otherwise, it shows the error page and logs the error

Next up

In the next chapter of Node Hero you are going to learn how to structure your Node.js projects correctly.

In the meantime, try integrating with different API providers, and if you run into problems or questions, don't hesitate to share them in the comment section!


What's new in Node v6?

New “Current” version line focuses on performance improvements, increased reliability and better security for its 3.5 million users - https://nodejs.org/en/blog/announcements/v6-release/

Node.js v6 just got released - let's take a look at the improvements and new features landed in this release.

Performance Improvements

Security Improvements

New ES6 Features

Default function parameters
function multiply(a, b = 1) {  
  return a * b
}

multiply(5) // 5  

Learn more on the default function parameters.

Rest parameters
function fun1(...theArgs) {  
  console.log(theArgs.length)
}

fun1()  // 0  
fun1(5) // 1  
fun1(5, 6, 7) // 3  

Learn more on the rest parameters.

Spread operator
// before the spread operator
function myFunction(x, y, z) { }  
var args = [0, 1, 2]  
myFunction.apply(null, args)

// with the spread operator
function myFunction(x, y, z) { }  
var args = [0, 1, 2]  
myFunction(...args)  

Learn more on the spread operator.

Destructuring
var x = [1, 2, 3, 4, 5]  
var [y, z] = x  
console.log(y) // 1  
console.log(z) // 2  

Learn more on destructuring.

new.target
function Foo() {  
  if (!new.target) throw new Error('Foo() must be called with new')
  console.log('Foo instantiated with new')
}

Foo() // throws "Foo() must be called with new"  
new Foo() // logs "Foo instantiated with new"  

Learn more on the new.target.

Proxy

The Proxy object is used to define custom behavior for fundamental operations.

var handler = {  
    get: function(target, name){
        return name in target ? target[name] : 37
    }
};

var p = new Proxy({}, handler)  
p.a = 1  
p.b = undefined

console.log(p.a, p.b) // 1, undefined  
console.log('c' in p, p.c) // false, 37  

Learn more on proxies.

Reflect

Reflect is a built-in object that provides methods for interceptable JavaScript operations.
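For example, a few of the methods it provides:

```javascript
const obj = { language: 'JavaScript' }

console.log(Reflect.has(obj, 'language')) // true
console.log(Reflect.get(obj, 'language')) // JavaScript
console.log(Reflect.ownKeys(obj))         // [ 'language' ]
```

These mirror operations like the in operator and property access, but as first-class functions.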

Learn more on reflect.

Symbols

A symbol is a unique and immutable data type and may be used as an identifier for object properties.

Symbol("foo") === Symbol("foo"); // false  

Learn more on symbols.

Trying Node.js v6

If you are using nvm, you only have to:

nvm install 6  

If for some reason you cannot use nvm, you can download the binary from the official Node.js website.

When should I migrate to Node 6?

In October, Node.js v6 will become the LTS version - after that, you should make the switch.

Node Hero - Node.js Database Tutorial

This is the 5th post of the tutorial series called Node Hero - in these chapters, you can learn how to get started with Node.js and deliver software products using it.

In the following Node.js database tutorial, I’ll show you how you can set up a Node.js application with a database, and teach you the basics of using it.

Storing data in a global variable

Serving static pages for users - as you learned in the previous chapter - can be suitable for landing pages or for personal blogs. However, if you want to deliver personalized content, you have to store the data somewhere.

Let’s take a simple example: user signup. You can serve customized content for individual users, or make certain content available to them only after identification.

If a user wants to sign up for your application, you might want to create a route handler to make it possible:

const users = []

app.post('/users', function (req, res) {  
    // retrieve user posted data from the body
    const user = req.body
    users.push({
      name: user.name,
      age: user.age
    })
    res.send('successfully registered')
})

This way you can store the users in a global variable, which will reside in memory for the lifetime of your application.

Using this method might be problematic for several reasons:

  • RAM is expensive,
  • memory resets each time you restart your application,
  • if you don't clean up, your process can eventually run out of memory and crash.

Node.js Monitoring and Debugging from the Experts of RisingStack

Build performant applications using Trace
Learn more

Storing data in a file

The next thing that might come up in your mind is to store the data in files.

If we store our user data permanently on the file system, we can avoid the previously listed problems.

This method looks like the following in practice:

const fs = require('fs')

app.post('/users', function (req, res) {  
    const user = req.body
    fs.appendFile('users.txt', JSON.stringify({ name: user.name, age: user.age }), (err) => {
        res.send('successfully registered')
    })
})

This way we won’t lose user data, not even after a server restart. This solution is also cost-efficient, since buying storage is cheaper than buying RAM.

Unfortunately storing user data this way still has a couple of flaws:

  • Appending is okay, but think about updating or deleting entries.
  • If we're working with files, there is no easy way to access them in parallel - file locks will prevent concurrent writes.
  • When we try to scale our application up, we cannot easily share files between servers (you can, but it is way beyond the scope of this tutorial).

This is where real databases come into play.

You might have already heard that there are two main kinds of databases: SQL and NoSQL.

SQL

Let's start with SQL. It is a query language designed to work with relational databases. SQL has a couple of flavors depending on the product you're using, but the fundamentals are the same in each of them.

The data itself will be stored in tables, and each inserted piece will be represented as a row in the table, just like in Google Sheets, or Microsoft Excel.

Within an SQL database, you can define schemas - these schemas will provide a skeleton for the data you'll put in there. The types of the different values have to be set before you can store your data. For example, you'll have to define a table for your user data, and have to tell the database that it has a username which is a string, and age, which is an integer type.

NoSQL

On the other hand, NoSQL databases have become quite popular in the last decade. With NoSQL you don't have to define a schema, and you can store arbitrary JSON. This is handy with JavaScript because we can turn any object into JSON pretty easily. Be careful though: without a schema, you can never guarantee that the data is consistent, and you can never be sure what is in the database.

Node.js and MongoDB

There is a common misconception about Node.js that we hear all the time:

"Node.js can only be used with MongoDB (which is the most popular NoSQL database)."

In my experience, this is not true. There are drivers available for most databases, and they also have libraries on NPM. In my opinion, they are as straightforward and easy to use as the MongoDB one.

Node.js and PostgreSQL

For the sake of simplicity, we're going to use SQL in the following example. My dialect of choice is PostgreSQL.

To have PostgreSQL up and running you have to install it on your computer. If you're on a Mac, you can use homebrew to install PostgreSQL. Otherwise, if you're on Linux, you can install it with your package manager of choice.

Node.js Database Example PostgreSQL

For further information read this excellent guide on getting your first database up and running.

If you're planning to use a database browser tool, I'd recommend the command line program called psql - it's bundled with the PostgreSQL server installation. Here's a small cheat sheet that will come handy if you start using it.

If you don't like the command-line interface, you can use pgAdmin which is an open source GUI tool for PostgreSQL administration.

Note that SQL is a language on its own, we won't cover all of its features, just the simpler ones. To learn more, there are a lot of great courses online that cover all the basics on PostgreSQL.

Node.js Database Interaction

First, we have to create the database we are going to use. To do so, enter the following command in the terminal:

createdb node_hero

Then we have to create the table for our users.

CREATE TABLE users(  
  name VARCHAR(20),
  age SMALLINT
);

Finally, we can get back to coding. Here is how you can interact with your database via your Node.js program.

'use strict'

const pg = require('pg')  
const conString = 'postgres://username:password@localhost/node_hero' // make sure to match your own database's credentials

pg.connect(conString, function (err, client, done) {  
  if (err) {
    return console.error('error fetching client from pool', err)
  }
  client.query('SELECT $1::varchar AS my_first_query', ['node hero'], function (err, result) {
    done()

    if (err) {
      return console.error('error happened during query', err)
    }
    console.log(result.rows[0])
    process.exit(0)
  })
})

This was just a simple example - a 'hello world' in PostgreSQL. Notice that the first parameter is a string containing our SQL command, and the second parameter is an array of values that we'd like to parameterize the query with.

It is a huge security error to insert user input into queries as it comes in. Parameterized queries protect you from SQL injection attacks, in which the attacker exploits insufficiently sanitized SQL queries. Always take this into consideration when building any user-facing application. To learn more, check out our Node.js Application Security checklist.
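To see why parameterization matters, here is what naive string concatenation produces (the input value is made up for illustration):

```javascript
const userInput = "x'; DROP TABLE users; --"

// DON'T: concatenating input straight into the SQL string
const unsafe = "SELECT name FROM users WHERE name = '" + userInput + "';"
console.log(unsafe)
// SELECT name FROM users WHERE name = 'x'; DROP TABLE users; --';

// DO: keep the query text and the values separate, and let the driver
// bind them - e.g. client.query('... WHERE name = $1;', [userInput], cb)
```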

Let's continue with our previous example.

app.post('/users', function (req, res, next) {  
  const user = req.body

  pg.connect(conString, function (err, client, done) {
    if (err) {
      // pass the error to the express error handler
      return next(err)
    }
    client.query('INSERT INTO users (name, age) VALUES ($1, $2);', [user.name, user.age], function (err, result) {
      done() //this done callback signals the pg driver that the connection can be closed or returned to the connection pool

      if (err) {
        // pass the error to the express error handler
        return next(err)
      }

      res.sendStatus(200) // note: res.send(200) is deprecated in Express 4+
    })
  })
})

Achievement unlocked: the user is stored in the database! :) Now let's try retrieving them - let’s add a new endpoint to our application for user retrieval.

app.get('/users', function (req, res, next) {  
  pg.connect(conString, function (err, client, done) {
    if (err) {
      // pass the error to the express error handler
      return next(err)
    }
    client.query('SELECT name, age FROM users;', [], function (err, result) {
      done()

      if (err) {
        // pass the error to the express error handler
        return next(err)
      }

      res.json(result.rows)
    })
  })
})

Download the whole Node Hero series as a single pdf

That wasn't that hard, was it?

Now you can run any complex SQL query that you can come up with within your Node.js application.

With this technique, you can store data persistently in your application, and thanks to the hard-working team of the node-postgres module, it is a piece of cake to do so.

We have gone through all the basics you have to know about using databases with Node.js. Now go, and create something yourself.

Try things out and experiment, because that's the best way of becoming a real Node Hero! Practice and be prepared for the next chapter on how to communicate with third-party APIs!

If you have any questions regarding this tutorial or about using Node.js with databases, don't hesitate to ask!


Node Hero - Your First Node.js HTTP Server

This is the 4th post of the tutorial series called Node Hero - in these chapters you can learn how to get started with Node.js and deliver software products using it.

In this chapter, I’ll show how you can fire up a simple Node.js HTTP server and start serving requests.

The http module for your Node.js server

When you start building HTTP-based applications in Node.js, the built-in http/https modules are the ones you will interact with.

Now, let's create your first Node.js HTTP server! We'll need to require the http module and bind our server to the port 3000 to listen on.

// content of index.js
const http = require('http')  
const port = 3000

const requestHandler = (request, response) => {  
  console.log(request.url)
  response.end('Hello Node.js Server!')
}

const server = http.createServer(requestHandler)

server.listen(port, (err) => {  
  if (err) {
    return console.log('something bad happened', err)
  }

  console.log(`server is listening on ${port}`)
})

You can start it with:

$ node index.js

Things to notice here:

  • requestHandler: this function will be invoked every time a request hits the server. If you visit localhost:3000 from your browser, two log messages will appear: one for / and one for favicon.ico
  • if (err): error handling - if the port is already taken, or for any other reason our server cannot start, we get notified here

The http module is very low-level - creating a complex web application using the snippet above is very time-consuming. This is the reason why we usually pick a framework to work with for our projects. There are a lot you can pick from; for this and the next chapters we are going to use Express, as you will find the most modules on NPM for Express.


Express

Fast, unopinionated, minimalist web framework for Node.js - http://expressjs.com/

Adding Express to your project is only an NPM install away:

$ npm install express --save

Once you have Express installed, let's see how you can create a similar application as before:

const express = require('express')  
const app = express()  
const port = 3000

app.get('/', (request, response) => {  
  response.send('Hello from Express!')
})

app.listen(port, (err) => {  
  if (err) {
    return console.log('something bad happened', err)
  }

  console.log(`server is listening on ${port}`)
})

The biggest difference you have to notice here is that Express by default gives you a router. You don't have to check the URL manually to decide what to do; instead, you define the application's routing with app.get, app.post, app.put, etc., which map to the corresponding HTTP verbs.

One of the most powerful concepts that Express implements is the middleware pattern.

Middlewares

You can think of middlewares as Unix pipelines, but for HTTP requests.

Express middlewares for building a Node.js HTTP server

In the diagram you can see how a request can go through an Express application. It travels through three middlewares; each can modify it, and then, based on the business logic, either the third middleware sends back a response or the request is passed on to a route handler.
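The pattern itself fits in a few lines of plain JavaScript - a toy runner for illustration, not Express's actual implementation:

```javascript
// calls each middleware in order; a middleware decides whether to
// continue the chain by invoking next()
function run (middlewares, request) {
  function next (index) {
    const middleware = middlewares[index]
    if (middleware) middleware(request, () => next(index + 1))
  }
  next(0)
}

const request = { url: '/' }

run([
  (req, next) => { req.receivedAt = Date.now(); next() },  // decorates the request
  (req, next) => { req.chance = Math.random(); next() },   // decorates it further
  (req) => console.log('handler sees chance:', req.chance) // the "route handler"
], request)
```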

In practice, you can do it this way:

const express = require('express')  
const app = express()

app.use((request, response, next) => {  
  console.log(request.headers)
  next()
})

app.use((request, response, next) => {  
  request.chance = Math.random()
  next()
})

app.get('/', (request, response) => {  
  response.json({
    chance: request.chance
  })
})

app.listen(3000)  

Things to notice here:

  • app.use: this is how you can define middlewares - it takes a function with three parameters: the first is the request, the second is the response, and the third is the next callback. Calling next signals Express that it can jump to the next middleware or route handler.
  • The first middleware just logs the headers and instantly calls the next one.
  • The second one adds an extra property to the request - this is one of the most powerful features of the middleware pattern. Your middlewares can append extra data to the request object that downstream middlewares or route handlers can read or alter.

Error handling

As in all frameworks, getting the error handling right is crucial. In Express you have to create a special middleware function to do so - a middleware with four parameters:

const express = require('express')  
const app = express()

app.get('/', (request, response) => {  
  throw new Error('oops')
})

app.use((err, request, response, next) => {  
  // log the error, for now just console.log
  console.log(err)
  response.status(500).send('Something broke!')
})

Things to notice here:

  • The error handler function should be the last function added with app.use.
  • The error handler has a next callback - it can be used to chain multiple error handlers.

Rendering HTML

So far we have taken a look at how to send JSON responses - it is time to learn how to render HTML the easy way. For that, we are going to use the handlebars package with the express-handlebars wrapper.

First, let's create the following directory structure:

├── index.js
└── views
    ├── home.hbs
    └── layouts
        └── main.hbs

Once you have that, populate index.js with the following snippet:

// index.js
const path = require('path')  
const express = require('express')  
const exphbs = require('express-handlebars')

const app = express()

app.engine('.hbs', exphbs({  
  defaultLayout: 'main',
  extname: '.hbs',
  layoutsDir: path.join(__dirname, 'views/layouts')
}))
app.set('view engine', '.hbs')  
app.set('views', path.join(__dirname, 'views'))  

The code above initializes the handlebars engine and sets the layouts directory to views/layouts. This is the directory where your layouts will be stored.

Once you have this setup, you can put your initial html into the main.hbs - to keep things simple let's go with this one:

<html>  
  <head>
    <title>Express handlebars</title>
  </head>
  <body>
    {{{body}}}
  </body>
</html>  

You can notice the {{{body}}} placeholder - this is where your content will be placed - let's create the home.hbs!

<h2>Hello {{name}}</h2>

The last thing we have to do to make it work is to add a route handler to our Express application:

app.get('/', (request, response) => {  
  response.render('home', {
    name: 'John'
  })
})

The render method takes two parameters:

  • The first one is the name of the view,
  • and the second is the data you want to render.

Once you call that endpoint you will end up with something like this:

<html>  
  <head>
    <title>Express handlebars</title>
  </head>
  <body>
    <h2>Hello John</h2>
  </body>
</html>  

This is just the tip of the iceberg - to learn how to add more layouts and even partials, please refer to the official express-handlebars documentation.

Debugging Express

In some cases, you may need to see what happens with Express when your application is running. To do so, you can pass the following environment variable to Express: DEBUG=express*.

You have to start your Node.js HTTP server using:

$ DEBUG=express* node index.js


Summary

This is how you can set up your first Node.js HTTP server from scratch. I recommend starting with Express, then feel free to experiment. Let me know how it went in the comments.

In the next chapter, you will learn how to retrieve information from databases - subscribe to our newsletter for updates.

In the meantime if you have any questions, don't hesitate to ask!

Node Hero - Understanding Async Programming in Node.js


This is the third post of the tutorial series called Node Hero - in these chapters you can learn how to get started with Node.js and deliver software products using it.

In this chapter, I’ll guide you through async programming principles, and show you how to do async in JavaScript and Node.js.

Synchronous Programming

In traditional programming practice, most I/O operations happen synchronously. If you think about Java, and about how you would read a file using Java, you would end up with something like this:

try (FileInputStream inputStream = new FileInputStream("foo.txt")) {
    // IOUtils comes from the Apache Commons IO library
    String fileContent = IOUtils.toString(inputStream, "UTF-8");
}

What happens in the background? The main thread will be blocked until the file is read, which means that nothing else can be done in the meantime. To solve this problem and utilize your CPU better, you would have to manage threads manually.

If you have more blocking operations, the situation gets even worse:

Non-async blocking operations example in Node Hero tutorial series. (The red bars show when the process is waiting for an external resource's response and is blocked, the black bars show when your code is running, the green bars show the rest of the application)

To resolve this issue, Node.js introduced an asynchronous programming model.


Asynchronous programming in Node.js

Asynchronous I/O is a form of input/output processing that permits other processing to continue before the transmission has finished.

In the following example, I will show you a simple file reading process in Node.js - both in a synchronous and an asynchronous way - with the intention of showing you what can be achieved by avoiding blocking your applications.

Let's start with a simple example - reading a file using Node.js in a synchronous way:

const fs = require('fs')  
let content  
try {  
  content = fs.readFileSync('file.md', 'utf-8')
} catch (ex) {
  console.log(ex)
}
console.log(content)  

What just happened here? We tried to read a file using the synchronous interface of the fs module. It works as expected - the content variable will contain the content of file.md. The problem with this approach is that Node.js will be blocked until the operation finishes - meaning it can do absolutely nothing while the file is being read.

Let's see how we can fix it!

Asynchronous programming - as we know now in JavaScript - can only be achieved with functions being first-class citizens of the language: they can be passed around like any other variables to other functions. Functions that can take other functions as arguments are called higher-order functions.

One of the easiest examples of higher-order functions:

const numbers = [2,4,1,5,4]

function isBiggerThanTwo (num) {  
  return num > 2
}

numbers.filter(isBiggerThanTwo) // [4, 5, 4]

In the example above we pass in a function to the filter function. This way we can define the filtering logic.

This is how callbacks were born: if you pass a function to another function as a parameter, it can be called once the job is finished. No need to return values - only calling another function with the resulting values.

Node.js uses so-called error-first callbacks: the first argument of the callback is an error object (or null if everything went fine), and the actual results come in the arguments after it. This convention is at the heart of Node.js itself - the core modules use it, as do most of the modules found on NPM.

const fs = require('fs')  
fs.readFile('file.md', 'utf-8', function (err, content) {  
  if (err) {
    return console.log(err)
  }

  console.log(content)
})

Things to notice here:

  • error-handling: instead of a try-catch block you have to check for errors in the callback
  • no return value: async functions don't return values, but values will be passed to the callbacks

Let's modify this file a little bit to see how it works in practice:

const fs = require('fs')

console.log('start reading a file...')

fs.readFile('file.md', 'utf-8', function (err, content) {  
  if (err) {
    console.log('error happened during reading the file')
    return console.log(err)
  }

  console.log(content)
})

console.log('end of the file')  

The output of this script will be:

start reading a file...  
end of the file  
error happened during reading the file  

As you can see once we started to read our file the execution continued, and the application printed end of the file. Our callback was only called once the file read was finished. How is it possible? Meet the event loop.

The Event Loop

The event loop is at the heart of Node.js / JavaScript - it is responsible for scheduling asynchronous operations.

Before diving deeper, let's make sure we understand what event-driven programming is.

Event-driven programming is a programming paradigm in which the flow of the program is determined by events such as user actions (mouse clicks, key presses), sensor outputs, or messages from other programs/threads.

In practice, it means that applications act on events.

Also, as we have already learned in the first chapter, Node.js is single-threaded - from a developer's point of view. It means that you don't have to deal with threads and synchronizing them, Node.js abstracts this complexity away. Everything except your code is executing in parallel.
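A three-line script makes the scheduling visible: the callback passed to setTimeout is queued and only runs after the current synchronous code has finished, even with a 0 ms delay (the order array below is just for illustration):

```javascript
const order = []

order.push('first')
setTimeout(() => {
  order.push('third') // queued: runs only after the synchronous code is done
  console.log(order.join(' -> ')) // first -> second -> third
}, 0)
order.push('second')
```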

To understand the event loop more in-depth, continue watching this video:

Async Control Flow

As now you have a basic understanding of how async programming works in JavaScript, let's take a look at a few examples on how you can organize your code.

Async.js

To avoid the so-called Callback Hell, one thing you can do is to start using async.js.

Async.js helps to structure your applications and makes control flow easier.

Let’s check a short example of using Async.js, and then rewrite it by using Promises.

The following snippet maps through three files for stats on them:

async.map(['file1', 'file2', 'file3'], fs.stat, function (err, results) {
    // results is now an array of stats for each file
})

Promises

The Promise object is used for deferred and asynchronous computations. A Promise represents an operation that hasn't completed yet but is expected in the future.

In practice, the previous example could be rewritten as follows:

function stats (file) {  
  return new Promise((resolve, reject) => {
    fs.stat(file, (err, data) => {
      if (err) {
        return reject(err)
      }
      resolve(data)
    })
  })
}

Promise.all([  
  stats('file1'),
  stats('file2'),
  stats('file3')
])
.then((data) => console.log(data))
.catch((err) => console.log(err))

Of course, if you use a method that has a Promise interface, the example can be even shorter.


Next Up: Your First Node.js Server

In the next chapter, you will learn how to fire up your first Node.js HTTP server - subscribe to our newsletter for updates.

In the meantime if you have any questions, don't hesitate to ask!

Node Hero - Using NPM: Tutorial

Node Hero - Using NPM: Tutorial

This is the second post of the tutorial series called Node Hero - in these chapters you can learn how to get started with Node.js and deliver software products using it. In this chapter, you'll learn what NPM is and how to use it. Let's get started!

NPM in a Nutshell

NPM is the package manager used by Node.js applications - you can find a ton of modules here, so you don't have to reinvent the wheel. It is like Maven for Java or Composer for PHP. There are two primary interfaces you will interact with - the NPM website and the NPM command line toolkit.

Both the website and the CLI use the same registry to show modules and search for them.

The Website

The NPM website can be found at https://npmjs.com. Here you can sign up as a new user or search for packages.

npm website for Node Hero using npm tutorial

The Command Line Interface

You can run the CLI simply with:

npm  

Note that NPM is bundled with the Node.js binary, so you don't have to install it separately - however, if you want to use a specific npm version, you can update it. If you want to install npm version 3, you can do that with: npm install npm@3 -g.


Using NPM: Tutorial

You have already met NPM in the last article on Getting started with Node.js when you created the package.json file. Let's extend that knowledge!

Adding Dependencies

In this section you are going to learn how to add runtime dependencies to your application.

Once you have your package.json file you can add dependencies to your application. Let's add one! Try the following:

npm install lodash --save  

With this single command we achieved two things: first of all, lodash is downloaded and placed into the node_modules folder. This is the folder where all your external dependencies will be put. Usually, you don't want to add this folder to your source control, so if you are using git make sure to add it to the .gitignore file.

This can be a good starting point for your .gitignore

Let's take a look at what happened in the package.json file! A new property called dependencies has appeared:

"dependencies": {
  "lodash": "4.6.1"
}

This means that lodash version 4.6.1 is now installed and ready to be used. Note that NPM follows SemVer to version packages.

Given a version number MAJOR.MINOR.PATCH, increment the MAJOR version when you make incompatible API changes, MINOR version when you add functionality in a backwards-compatible manner, and PATCH version when you make backwards-compatible bug fixes. For more information: http://semver.org/
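In practice you will often see version ranges rather than exact versions in your package.json - npm typically writes a caret range by default. The most common notations:

```
"lodash": "4.6.1"    exactly version 4.6.1
"lodash": "~4.6.1"   >=4.6.1 <4.7.0  (patch updates only)
"lodash": "^4.6.1"   >=4.6.1 <5.0.0  (minor and patch updates)
```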

As lodash is ready to be used, let's see how we can do so! You can do it in the same way you did it with your own module except now you don't have to define the path, only the name of the module:

// index.js
const _ = require('lodash')

_.assign({ 'a': 1 }, { 'b': 2 }, { 'c': 3 });  
// → { 'a': 1, 'b': 2, 'c': 3 }

Adding Development Dependencies

In this section you are going to learn how to add build-time dependencies to your application.

When you are building web applications, you may need to minify your JavaScript files, concatenate CSS files, and so on. The modules that do this only run while building the assets, so the running application doesn't need them.

You can install such scripts with:

npm install mocha --save-dev  

Once you do that, a new section will appear in your package.json file called devDependencies. All the modules you install with --save-dev will be placed there - also, they will be put in the very same node_modules directory.

NPM Scripts

NPM scripts are a very powerful concept - with their help you can build small utilities or even compose complex build systems.

The most common ones are the start and the test scripts. With start you can define how your application should be started, while test is for running tests. In your package.json they can look something like this:

  "scripts": {
    "start": "node index.js",
    "test": "mocha test",
    "your-custom-script": "echo npm"
  }

Things to notice here:

  • start: pretty straightforward, it just describes the starting point of your application, it can be invoked with npm start
  • test: the purpose of this one is to run your tests - one gotcha here is that in this case mocha doesn't need to be installed globally, as npm will look for it in the node_modules/.bin folder, and mocha will be placed there as well. It can be invoked with: npm test.
  • your-custom-script: anything that you want, you can pick any name. It can be invoked with npm run your-custom-script - don't forget the run part!

Scoped / Private Packages

Originally NPM had a global shared namespace for module names - with more than 250,000 modules in the registry, most of the simple names are already taken. Also, the global namespace contains public modules only.

NPM addressed this problem with the introduction of scoped packages. Scoped packages have the following naming pattern:

@myorg/mypackage

You can install scoped packages the same way as you did before:

npm install @myorg/mypackage --save-dev  

It will show up in your package.json in the following way:

"dependencies": {
  "@myorg/mypackage": "^1.0.0"
}

Requiring scoped packages works as expected:

require('@myorg/mypackage')

For more information refer to the NPM scoped module docs.


Next Up: Async Programming

In the next chapter, you can learn the principles of async programming using callbacks and Promises - subscribe to our newsletter for updates.

In the meantime if you have any questions, don't hesitate to ask!