We have already discussed how you can deploy your Node.js application to production using Continuous Deployment.
In this chapter we will take a look at what should happen after the code is out there.
Keep it running
Programmer errors will crash the application, so it has to be restarted automatically. Forever is a simple tool for exactly this (PM2 can be a good alternative – thanks David for pointing it out!).
Installing Forever:
npm install -g forever
After this, running your Node.js application is as easy as:
forever start app.js
Easy, huh? 🙂
This approach works really well if your stack contains only Node.js applications. But what happens when you want to use the same tool to monitor/control other kinds of processes as well, like Ruby or PHP? You need something more generic.
This is when Supervisord comes into the picture.
Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
As Supervisor is written in Python, installing it can be done via:
easy_install supervisor
The only thing left is to add your Node.js application to Supervisor. Supervisor works with configuration files that can be found in /etc/supervisor/conf.d/.
A sample Supervisor config might look like this (it should be placed at /etc/supervisor/conf.d/myapi.conf):
[program:my-api]
command=node /home/myuser/myapi/app.js
autostart=true
autorestart=true
environment=NODE_ENV=production
stderr_logfile=/var/log/myapi.err.log
stdout_logfile=/var/log/myapi.out.log
user=myuser
Pay extra attention to the user part – never, ever run your application with superuser rights. More on Node.js Security.
To make all this work, we have to tell Supervisor to take our new configuration into account:
supervisorctl reread
supervisorctl update
That’s it – of course, Supervisor can do a lot more than this; for more information, check out the docs.
Is it responding?
Your application may become unresponsive, or may lose its connection to the database or any other service/resource it needs to work as expected. To be able to monitor these events and respond accordingly, your application should expose a healthcheck interface, like GET /healthcheck. If everything is fine it should return HTTP 200; if not, HTTP 5**.
In some cases, restarting the process will solve the issue. Speaking of Supervisor: httpok is a Supervisor event listener that makes GET requests to a configured URL. If the check fails or times out, httpok will restart the process.
To enable httpok, the following lines have to be placed in supervisord.conf:
[eventlistener:httpok]
command=httpok -p my-api http://localhost:3000/healthcheck
events=TICK_5
Also, httpok has to be on your system PATH.
Reverse proxy
So far so good: we have our Node.js application running – even after a crash it will be restarted.
As we do not want to run our application using superuser rights, we won’t be able to listen on port 80. What can we do? We can set up port forwarding using iptables, or use a reverse proxy for this.
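For reference, the iptables route is a single NAT rule – a sketch only: it requires root, and the interface name eth0 is an assumption:

```shell
# Redirect incoming traffic on port 80 to the app listening on 3000.
# Requires root; "eth0" is an assumed interface name.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
```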
In this article, we will go with setting up a reverse proxy, as it can provide an additional security layer, as well as offload some tasks from the Node.js application, like:
- SSL termination – nginx handles encryption, so Node.js does not have to deal with it
- gzip compression of responses
- serving static content
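As a sketch, those offloads look like this in nginx config – the certificate paths, directories, and domain are assumptions, not part of the original setup:

```nginx
server {
    listen 443 ssl;
    server_name my.domain.com;

    # SSL termination – Node.js never sees the encrypted traffic
    ssl_certificate     /etc/nginx/ssl/my.domain.com.crt;
    ssl_certificate_key /etc/nginx/ssl/my.domain.com.key;

    # compress responses before they go out on the wire
    gzip on;
    gzip_types text/plain text/css application/json application/javascript;

    # serve static assets directly, bypassing Node.js
    # (/static/foo is looked up at /home/myuser/myapi/public/static/foo)
    location /static/ {
        root /home/myuser/myapi/public;
    }

    location / {
        proxy_pass http://localhost:3000;
    }
}
```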
Our weapon of choice will be nginx. After installing it, navigate to /etc/nginx. You will place your site-specific configurations under sites-available – to enable them, you have to create a symlink in the sites-enabled directory pointing to the corresponding site in sites-available.
A simple nginx config will look like this (/etc/nginx/sites-available/my-site):
server {
listen 80;
server_name my.domain.com;
location / {
proxy_pass http://localhost:3000;
}
}
The only thing left is to tell nginx to reload the configuration:
nginx -s reload
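One common addition, though not required for the setup above: with the minimal config, your application only ever sees 127.0.0.1 as the client address. Having nginx forward the original request details fixes that – a sketch:

```nginx
location / {
    proxy_pass http://localhost:3000;
    # let the Node.js app see the original client IP and Host header
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
}
```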
Load balancing
Currently the architecture might look something like this: clients → nginx → a single Node.js process.
So far we have only one instance serving requests – let’s scale up! To do this, we have to create more of these instances and somehow split the load between them.
For this, you can use HAProxy or a CDN with load-balancing functionality, so your setup will look something like this: clients → HAProxy → multiple Node.js instances.
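A minimal HAProxy sketch for such a setup – the backend addresses are assumptions, and the health check reuses the /healthcheck endpoint from the previous section:

```haproxy
frontend http-in
    bind *:80
    default_backend node_apps

backend node_apps
    balance roundrobin
    # take an instance out of rotation when its healthcheck fails
    option httpchk GET /healthcheck
    server app1 192.168.1.101:3000 check
    server app2 192.168.1.102:3000 check
```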
Still, in this setup HAProxy can become a single point of failure. To eliminate this SPOF, you can use keepalived – all you need is an extra virtual IP address.
Recommended reading
Now that we have covered how to deploy your Node.js application and how to operate it, the next post shows how to debug and monitor it.