News, events

Download & Update Node.js to the Latest Version! Node v17.1.0 Current / LTS v16.13.0 Direct Links

Node 16 has been the LTS version since 2021-10-26, while Node 17 became the Current version on 2021-10-19. The next LTS version, v18, is planned to take over on 2022-10-25.

In this article below, you’ll find changelogs and download / update information regarding Node.js!

Node.js LTS & Current Download for macOS:

Node.js LTS & Current Download for Windows:

For other downloads like Linux libraries, source codes, Docker images, etc., please visit https://nodejs.org/en/download/

Node.js Release Schedule:

Releases

Node.js v17 is the Current version!

OpenSSL 3.0

Node.js now includes OpenSSL 3.0, specifically quictls/openssl, which provides QUIC support. With OpenSSL 3.0, FIPS support is again available using the new FIPS module. For details about how to build Node.js with FIPS support please see BUILDING.md.

While OpenSSL 3.0 APIs should be mostly compatible with those provided by OpenSSL 1.1.1, we do anticipate some ecosystem impact due to tightened restrictions on the allowed algorithms and key sizes.

If you hit an ERR_OSSL_EVP_UNSUPPORTED error in your application with Node.js 17, it’s likely that your application or a module you’re using is attempting to use an algorithm or key size which is no longer allowed by default with OpenSSL 3.0. A command-line option, --openssl-legacy-provider, has been added to revert to the legacy provider as a temporary workaround for these tightened restrictions.

V8 9.5

The V8 JavaScript engine is updated to V8 9.5. This release comes with additional supported types for the Intl.DisplayNames API and Extended timeZoneName options in the Intl.DateTimeFormat API.
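As a rough sketch of what these additions enable (the exact output strings depend on the ICU data in your build), the new Intl.DisplayNames types and the extended timeZoneName options can be used like this:

const calendarNames = new Intl.DisplayNames(['en'], { type: 'calendar' });
console.log(calendarNames.of('gregory')); // e.g. "Gregorian Calendar"

const formatter = new Intl.DateTimeFormat('en', {
  timeZone: 'America/Los_Angeles',
  timeZoneName: 'shortGeneric' // one of the new extended options
});
console.log(formatter.format(Date.now())); // e.g. "11/16/2021, PT"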

Readline Promise API

The readline module provides an interface for reading data from a Readable stream (such as process.stdin) one line at a time.

The following simple example illustrates the basic use of the readline module:

import * as readline from 'node:readline/promises';
import { stdin as input, stdout as output } from 'process';

const rl = readline.createInterface({ input, output });

const answer = await rl.question('What do you think of Node.js? ');

console.log(`Thank you for your valuable feedback: ${answer}`);

rl.close();

Node.js CURRENT v17 Changelogs

Changelog for Node Version 17.1.0 (Current)

  • doc: add VoltrexMaster to collaborators
  • esm: add support for JSON import assertion
  • lib: add unsubscribe method to non-active DC channels
  • lib: add return value for DC channel.unsubscribe
  • v8: multi-tenant promise hook api

Changelog for Node Version 17.0.1 (Current)

Fixed distribution for native addon builds

This release fixes an issue introduced in Node.js v17.0.0, where some V8 headers were missing from the distributed tarball, making it impossible to build native addons. These headers are now included.

Fixed stream issues

  • Fixed a regression in stream.promises.pipeline, introduced in version 16.10.0: it is now possible again to pass an array of streams to the function (see the sketch after this list).
  • Fixed a bug in stream.Duplex.from, which didn’t work properly when an async generator function was passed to it.
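As a quick, hedged illustration of the pipeline fix (the file names below are made up for the example):

import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';
import { createGzip } from 'node:zlib';

// Passing a single array of streams works again:
await pipeline([
  createReadStream('input.txt'),
  createGzip(),
  createWriteStream('input.txt.gz')
]);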

Node.js v16 became the next LTS!

Node.js 16 got promoted to Long-term Support (LTS) in October 2021. Sneak peek from the announcement:

V8 upgraded to V8 9.0

As always a new version of the V8 JavaScript engine brings performance tweaks and improvements as well as keeping Node.js up to date with JavaScript language features. In Node.js v16.0.0, the V8 engine is updated to V8 9.0 — up from V8 8.6 in Node.js 15.

Stable Timers Promises API

The Timers Promises API provides an alternative set of timer functions that return Promise objects, removing the need to use util.promisify().
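A minimal sketch of what this looks like in practice, using the promisified setTimeout as a sleep helper:

import { setTimeout as sleep } from 'node:timers/promises';

// Resolves after roughly one second – no callback or util.promisify() needed:
await sleep(1000);
console.log('One second later');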

Some of the recently released features in Node.js 15, which will also be available in Node.js 16, include:

  • Experimental implementation of the standard Web Crypto API
  • npm 7 (v7.10.0 in Node.js v16.0.0)
  • Node-API version 8
  • Stable AbortController implementation based on the AbortController Web API
  • Stable Source Maps v3
  • Web platform atob (buffer.atob(data)) and btoa (buffer.btoa(data)) implementations for compatibility with legacy web platform APIs

Node.js v16 Changelogs

Changelog for Node Version 16.13.0

This release marks the transition of Node.js 16.x into Long Term Support (LTS) with the codename ‘Gallium’. The 16.x release line now moves into “Active LTS” and will remain so until October 2022. After that time, it will move into “Maintenance” until end of life in April 2024.

Changelog for Node Version 16.12.0

Experimental ESM Loader Hooks API:

Node.js ESM Loader hooks have been consolidated to represent the steps needed to facilitate future loader chaining:

  1. resolve: resolve [+ getFormat]
  2. load: getFormat + getSource + transformSource

For consistency, getGlobalPreloadCode has been renamed to globalPreload.

A loader exporting obsolete hook(s) will trigger a single deprecation warning (per loader) listing the errant hooks.
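A minimal, hedged sketch of a pass-through loader using the consolidated hooks (the API is experimental, so the exact signatures may still change):

// loader.mjs
export async function resolve(specifier, context, defaultResolve) {
  // Delegate to Node's default resolution
  return defaultResolve(specifier, context, defaultResolve);
}

export async function load(url, context, defaultLoad) {
  // Delegate to Node's default loading (returns { format, source })
  return defaultLoad(url, context, defaultLoad);
}

Such a loader would be registered with node --experimental-loader ./loader.mjs.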

Changelog for Node Version 16.11.1

This is a security release. Notable changes:

  • CVE-2021-22959: HTTP Request Smuggling due to spaces in headers (Medium): The http parser accepts requests with a space (SP) right after the header name before the colon. This can lead to HTTP Request Smuggling (HRS).
  • CVE-2021-22960: HTTP Request Smuggling when parsing the body (Medium): The parser ignores chunk extensions when parsing the body of chunked requests. This leads to HTTP Request Smuggling (HRS) under certain conditions.

Changelog for Node Version 16.11.0

  • crypto: update root certificates
  • deps: upgrade npm to 8.0.0, update nghttp2 to v1.45.1, update V8 to 9.4.146.19
  • tools: update certdata.txt

Changelog for Node Version 16.10.0

  • crypto: add rsa-pss keygen parameters
  • deps: upgrade npm to 7.24.0
  • deps: update Acorn to v8.5.0
  • doc: add Ayase-252 to collaborators
  • fs: make open and close stream override optional when unused
  • http: limit requests per connection
    • The maximum number of requests a socket can handle before closing the keep-alive connection can be set with server.maxRequestsPerSocket (see the sketch after this list).
  • src: add --no-global-search-paths cli option
    • Adds the --no-global-search-paths command-line option to not search modules from global paths like $HOME/.node_modules and $NODE_PATH.
  • src: make napi_create_reference accept symbol
  • stream: add signal support to pipeline generators
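A hedged sketch of the per-socket request limit mentioned above (the limit of 100 is arbitrary here):

import { createServer } from 'node:http';

const server = createServer((req, res) => {
  res.end('ok');
});

// Close the keep-alive connection after 100 requests on the same socket:
server.maxRequestsPerSocket = 100;

server.listen(3000);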

Changelog for Node Version 16.9.1

This release fixes a regression introduced by the V8 9.3 update in Node.js 16.9.0.

Changelog for Node Version 16.9.0

Corepack

Node.js now includes Corepack, a script that acts as a bridge between Node.js projects and the package managers they are intended to be used with during development. In practical terms, Corepack will let you use Yarn and pnpm without having to install them – just like what currently happens with npm, which is shipped in Node.js by default.

V8 9.3

V8 is updated to version 9.3, which includes performance improvements and new JavaScript features.

Object.hasOwn

Object.hasOwn is a static alias for Object.prototype.hasOwnProperty.call:

Object.hasOwn({ value: 42 }, 'value'); // Returns `true`.

Error cause

Errors can now be optionally constructed with a cause option, pointing to another error. This adds a cause property on the new error:

const error1 = new Error('Error one');
const error2 = new Error('Error two', { cause: error1 });
// error2.cause === error1

Other Notable Changes

  • crypto: add RSA-PSS params to asymmetricKeyDetails
  • module: support pattern trailers
  • stream: add stream.compose

Changelog for Node Version 16.8.0

  • doc: deprecate type coercion for dns.lookup options
  • stream: add stream.Duplex.from utility
  • stream: add isDisturbed helper
  • util: expose toUSVString 

Changelog for Node Version 16.7.0

  • fs, experimental: add recursive cp method

Changelog for Node Version 16.6.2

This is a security release. Notable Changes:

  • CVE-2021-3672/CVE-2021-22931: Improper handling of untypical characters in domain names: Node.js was vulnerable to Remote Code Execution, XSS, application crashes due to missing input validation of hostnames returned by Domain Name Servers in the Node.js DNS library which can lead to the output of wrong hostnames (leading to Domain Hijacking) and injection vulnerabilities in applications using the library.
  • CVE-2021-22930: Use after free on close http2 on stream canceling: Node.js was vulnerable to a use after free attack where an attacker might be able to exploit memory corruption to change process behavior. This release includes a follow-up fix for CVE-2021-22930 as the issue was not completely resolved by the previous fix.
  • CVE-2021-22939: Incomplete validation of rejectUnauthorized parameter: If the Node.js HTTPS API was used incorrectly and “undefined” was passed in for the “rejectUnauthorized” parameter, no error was returned and connections to servers with an expired certificate would have been accepted.

Changelog for Node Version 16.6.0

This is a security release. Notable Changes:

The V8 engine is updated to version 9.2.230.21.

It notably introduces the new Array.prototype.at method (also on Typed Arrays and strings):

const array = [1, 2, 3];

console.log(array.at(-1));
// Prints: 3

Other notable changes:

  • CVE-2021-22930: Use after free on close http2 on stream canceling:
    Node.js is vulnerable to a use after free attack where an attacker might be able to exploit the memory corruption, to change process behavior.
  • inspector: mark as stable
  • punycode: add pending deprecation
  • repl: enable --experimental-repl-await /w opt-out

Changelog for Node Version 16.5.0

Experimental Web Streams API: Node.js now exposes an experimental implementation of the Web Streams API.

While it is experimental, the API is not exposed on the global object and is only accessible using the new stream/web core module:

import { ReadableStream, WritableStream } from 'stream/web'; // Or from 'node:stream/web'

Importing the module will emit a single experimental warning per process.

The raw API is implemented and we are now working on its integration with various existing core APIs.
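A small, hedged sketch of what using the experimental API could look like:

import { ReadableStream } from 'node:stream/web';

const stream = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.close();
  }
});

const reader = stream.getReader();
const { value } = await reader.read();
console.log(value); // 'hello'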

Other notable changes:

  • fs: allow empty string for temp directory prefix
  • deps: upgrade npm to 7.19.1

Changelog for Node Version 16.4.2

Node.js 16.4.1 introduced a regression in the Windows installer on non-English locales that is being fixed in this release. There is no need to download this release if you are not using the Windows installer.

Changelog for Node Version 16.4.1

This is a security release. Vulnerabilities fixed:

  • CVE-2021-22918: libuv upgrade – Out of bounds read (Medium): Node.js is vulnerable to out-of-bounds read in libuv’s uv__idna_toascii() function which is used to convert strings to ASCII. This is called by Node’s dns module’s lookup() function and can lead to information disclosures or crashes.
  • CVE-2021-22921: Windows installer – Node Installer Local Privilege Escalation (Medium): Node.js is vulnerable to local privilege escalation attacks under certain conditions on Windows platforms. More specifically, improper configuration of permissions in the installation directory allows an attacker to perform two different escalation attacks: PATH and DLL hijacking.

Changelog for Node Version 16.4.0

  • async_hooks: stabilize part of AsyncLocalStorage
  • deps: upgrade npm to 7.18.1, update V8 to 9.1.269.36
  • dns: allow --dns-result-order to change default dns verbatim

Changelog for Node Version 16.3.0

  • cli: add -C alias for --conditions flag
  • deps: add workspaces support to npm install commands

Changelog for Node Version 16.2.0

  • async_hooks: use new v8::Context PromiseHook API
  • lib: support setting process.env.TZ on windows
  • module: add support for URL to import.meta.resolve
  • process: add ‘worker’ event
  • util: add util.types.isKeyObject and util.types.isCryptoKey

Changelog for Node Version 16.1.0

fs: allow no-params fsPromises fileHandle read

Changelog for Node Version 16.0.0

  • Stable Timers Promises API: The Timers Promises API provides an alternative set of timer functions that return Promise objects. Added in Node.js v15.0.0, in this release they graduate from experimental status to stable.
  • Toolchain and Compiler Upgrades: Node.js v16.0.0 will be the first release where we ship prebuilt binaries for Apple Silicon. While we’ll be providing separate tarballs for the Intel (darwin-x64) and ARM (darwin-arm64) architectures, the macOS installer (.pkg) will be shipped as a ‘fat’ (multi-architecture) binary.
  • V8 9.0: The V8 JavaScript engine is updated to V8 9.0, including performance tweaks and improvements. This update also brings the ECMAScript RegExp Match Indices, which provide the start and end indices of the captured string. The indices array is available via the .indices property on match objects when the regular expression has the /d flag.
  • Other Notable Changes:
    • assert: graduate assert.match and assert.doesNotMatch
    • buffer: expose btoa and atob as globals
    • deps: bump minimum ICU version to 68
    • deps: update ICU to 69.1
    • deps: update llhttp to 6.0.0
    • deps: upgrade npm to 7.10.0
    • http: add http.ClientRequest.getRawHeaderNames()
    • lib,src: update cluster to use Parent
    • module: add support for node:‑prefixed require(…) calls
    • perf_hooks: add histogram option to timerify
    • repl: add auto‑completion for node:‑prefixed require(…) calls
    • util: add getSystemErrorMap() impl

Learn More Node.js from RisingStack

At RisingStack, we’ve been writing JavaScript / Node tutorials for the community for the past 5 years. If you’re a beginner to Node.js, we recommend checking out our Node Hero tutorial series! The goal of this series is to help you get started with Node.js and make sure you understand how to write an application using it.

See all chapters of the Node Hero tutorial series:
  1. Getting Started with Node.js
  2. Using NPM
  3. Understanding async programming
  4. Your first Node.js HTTP server
  5. Node.js database tutorial
  6. Node.js request module tutorial
  7. Node.js project structure tutorial
  8. Node.js authentication using Passport.js
  9. Node.js unit testing tutorial
  10. Debugging Node.js applications
  11. Node.js Security Tutorial
  12. How to Deploy Node.js Applications
  13. Monitoring Node.js Applications

As a sequel to Node Hero, we have completed another series called Node.js at Scale – which focuses on advanced Node / JavaScript topics. Take a look!

Argo CD Kubernetes Tutorial

Usually, when devs set up a CI/CD pipeline for an application hosted on Kubernetes, they handle both the CI and CD parts in one task runner, such as CircleCI or Travis CI. These services offer push-based updates to your deployments, which means that credentials for the code repo and the deployment target must be stored with these services. This method can be problematic if the service gets compromised, e.g. as it happened to CodeShip last year.

Even using services such as GitLab CI and GitHub Actions requires that credentials for accessing your cluster be stored with them. If you’re employing GitOps to take advantage of the usual Push to repo -> Review Code -> Merge Code sequence for managing your infrastructure configuration as well, this also means giving them access to your whole infrastructure.

It can also be difficult to keep track of how the different deployed environments are drifting from the configuration files stored in the repo, since these external services are not specific to Kubernetes and thus aren’t aware of the status of all the deployed pieces.

Luckily there are tools to help us with these issues. Two of the most known are Argo CD and Flux. They allow credentials to be stored within your Kubernetes cluster, where you have more control over their security. They also offer pull-based deployment with drift detection. Both of these tools solve the same issues, but tackle them from different angles.

Here, we’ll take a deeper look at Argo CD out of the two.

What is Argo CD

Argo CD is a continuous deployment tool that you can install into your Kubernetes cluster. It can pull the latest code from a git repository and deploy it into the cluster – as opposed to external CD services, deployments are pull-based. You can manage updates for both your application and infrastructure configuration with Argo CD. Advantages of such a setup include being able to use credentials from the cluster itself for deployments, which can be stored in secrets or a vault.

Preparation

To try out Argo CD, we’ve also prepared a test project that we’ll deploy to Kubernetes hosted on DigitalOcean. You can grab the example project from our GitLab repository here: https://gitlab.com/risingstack-org/argocd-demo/

Forking the repo will allow you to make changes for yourself, and it can be set up later in Argo CD as the deployment source.

Get doctl from here:

https://docs.digitalocean.com/reference/doctl/how-to/install/

Or, if you’re using a mac, from Homebrew:

brew install doctl

You can use any Kubernetes provider for this tutorial. The two requirements are having a Docker repository and a Kubernetes cluster with access to it. For this tutorial, we chose to go with DigitalOcean for the simplicity of its setup, but most other platforms should work just fine.

We’ll focus on using the web UI for the majority of the process, but you can also opt to use the `doctl` cli tool if you wish. `doctl` can mostly replace `kubectl` as well. `doctl` will only be needed to push our built docker image to the repo that our deployment will have access to.

Helm is a templating engine for Kubernetes. It allows us to define values separately from the structure of the yaml files, which can help with access control and managing multiple environments using the same template.

You can grab Helm here: https://github.com/helm/helm/releases

Or via Homebrew for mac users:

brew install helm

Download the latest Argo CD version from https://github.com/argoproj/argo-cd/releases/latest

If you’re using a mac, you can grab the cli tools from Homebrew:

brew install argocd

DigitalOcean Setup

After logging in, first create a cluster using the “Create” button on the top right and selecting Kubernetes. For the purposes of this demo, we can just go with the smallest cluster with no additional nodes. Be sure to choose a data center close to you.

Preparing the demo app

You can find the demo app in the node-app folder in the repo you forked. Use this folder for the following steps to build and push the docker image to the GitLab registry:

docker login registry.gitlab.com

docker build . -t registry.gitlab.com/<substitute repo name here>/demo-app-1

docker push registry.gitlab.com/<substitute repo name here>/demo-app-1

GitLab offers a free image registry with every git repo – even free tier ones. You can use these to store your built image, but be aware that the registry inherits the privacy setting of the git repo; you can’t change them separately.

Once the image is ready, be sure to update the values.yaml file with the correct image url and use helm to generate the resources.yaml file. You can then deploy everything using kubectl:

helm template -f "./helm/demo-app/values.yaml" "./helm/demo-app" > "./helm/demo-app/resources/resources.yaml"

kubectl apply -f helm/demo-app/resources/resources.yaml

The only purpose of these demo-app resources is to showcase the Argo CD UI capabilities, which is why the chart also contains an Ingress resource as a plus.

Install Argo CD into the cluster

Argo CD provides a yaml file that installs everything you’ll need and it’s available online. The most important thing here is to make sure that you install it into the `argocd` namespace, otherwise, you’ll run into some errors later and Argo CD will not be usable.

kubectl create namespace argocd

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

From here, you can use Kubernetes port-forwarding to access the UI of Argo CD:

kubectl -n argocd port-forward svc/argocd-server 8080:443

This will expose the service on localhost:8080 – we will use the UI to set up the connection to GitLab, but it could also be done via the command line tool.

Argo CD setup

To log in on the UI, use `admin` as username, and the password retrieved by this command:

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

Once you’re logged in, connect your fork of the demo app repo from the Repositories inside the Settings menu on the left side. Here, we can choose between ssh and https authentication – for this demo, we’ll use https, but for ssh, you’d only need to set up a key pair for use.

[Screenshot: connecting a repository in Argo CD]

Create an API key on GitLab and use it in place of a password alongside your username to connect the repo. An API key allows for some measure of access control as opposed to using your account password.

After successfully connecting the repository, the only thing left is to set up an Application, which will take care of synchronizing the state of our deployment with that described in the GitLab repo.

[Screenshot: setting up a new application in Argo CD]

You’ll need to choose a branch or a tag to monitor. Let’s choose the master branch for now – it should contain the latest stable code anyway. Setting the sync policy to automatic allows for automatic deployments when the git repo is updated, and also provides automatic pruning and self-healing capabilities.

[Screenshot: Argo CD application setup]

Be sure to set the destination cluster to the one available in the dropdown and use the `demo` namespace. If everything is set correctly, Argo CD should now start syncing the deployment state.

Features of Argo CD

From the application view, you can now see the different parts that comprise our demo application.

[Screenshot: Argo CD application overview]

Clicking on any of these parts allows for checking the diff of the deployed config, and the one checked into git, as well as the yaml files themselves separately. The diff should be empty for now, but we’ll see it in action once we make some changes or if you disable automatic syncing.

[Screenshot: container details in Argo CD]

You also have access to the logs from the pods here, which can be quite useful – note, however, that logs are not retained between different pod instances, so they are lost when a pod is deleted.

[Screenshot: container logs in Argo CD]

It is also possible to handle rollbacks from here by clicking on the “History and Rollback” button. Here, you can see all the different versions that have been deployed to our cluster by commit.

You can re-deploy any of them by using the … menu on the top right and selecting “Redeploy” – this feature needs automatic deployment to be turned off, but you’ll be prompted to turn it off here if needed.

These should cover the most important parts of the UI and what is available in Argo CD. Next up, we’ll take a look at how the deployment update happens when code changes on GitLab.

Updating the deployment

With the setup done, any changes you make to the configuration that you push to the master branch should be reflected on the deployment shortly after.

A very simple way to check out the updating process is to bump up the `replicaCount` in values.yaml to 2 (or more), and run the helm command again to generate the resources.yaml. 

Then, commit and push to master and monitor the update process on the Argo CD UI.

You should see a new event in the demo-app events, with the reason `ScalingReplicaSet`.

[Screenshot: the ScalingReplicaSet event in Argo CD]

You can double-check the result using kubectl, where you should now see two instances of the demo-app running:

kubectl -n demo get pod

There is another branch prepared in the repo, called second-app, which has another app that you can deploy, so you can see some more of the update process and diffs. It is quite similar to how the previous deployment works.

First, you’ll need to merge the second-app branch into master – this will allow the changes to be automatically deployed, as we set it up already. Then, from the node-app-2 folder, build and push the docker image. Make sure to have a different version tag for it, so we can use the same repo!

docker build . -t registry.gitlab.com/<substitute repo name here>/demo-app-2

docker push registry.gitlab.com/<substitute repo name here>/demo-app-2

You can set deployments to manual for this step, to be able to take a better look at the diff before the actual update happens. You can do this from the sync settings part of `App details`.

[Screenshot: Argo CD sync policy settings]

Generate the updated resources file afterwards, then commit and push it to git to trigger the update in Argo CD:

helm template -f "./helm/demo-app/values.yaml" "./helm/demo-app" > "./helm/demo-app/resources/resources.yaml"

This should result in a diff appearing under `App details` -> `Diff` for you to check out. You can either deploy it manually or just turn auto-deploy back on.

Argo CD safeguards you against resource changes that drift from the latest source-controlled version of your code. Let’s try to manually scale up the deployment to 5 instances:

Get the name of the replica set:

kubectl -n demo get rs

Scale it to 5 instances:

kubectl -n demo scale --replicas=5 rs/demo-app-<number>

If you are quick enough, you can catch the changes applied on the ArgoCD Application Visualization as it tries to add those instances. However, ArgoCD will prevent this change, because it would drift from the source controlled version of the deployment. It also scales the deployment down to the defined value in the latest commit (in my example it was set to 3). 

The downscale event can be found under the `demo-app` deployment events, as shown below:

[Screenshot: the scale-down event in Argo CD]

From here, you can experiment with whatever changes you’d like!

Finishing our ArgoCD Kubernetes Tutorial

This was our quick introduction to using ArgoCD, which can make your GitOps workflow safer and more convenient.

Stay tuned, as we’re planning to take a look at the other heavy-hitter next time: Flux.

This article was written by Janos Kubisch, senior engineer at RisingStack.

React-Native Sound & Animation Tutorial

In this React-Native sound and animation tutorial, you’ll learn tips on how you can add animation and sound effects to your mobile application. We’ll also discuss topics like persisting data with React-Native AsyncStorage.

To showcase how you can do these things, we’ll use our Mobile Game which we’ve been building in the previous 4 episodes of this tutorial series.

Quick recap: In the previous episodes of our React-Native Tutorial Series, we built our React-Native game’s core: you can finally collect points, see them, and even lose.

Now let’s spice things up and make our game enjoyable with music, react native animations & sound effects, then finish off by saving the high score!

[Image: animation and sound in our React-Native game]

Adding Sound to our React-Native Game

As you may have noticed, we have a /music and /sfx directory in the assets, but we didn’t quite touch them until now. They are not mine, so let’s just give credit to the creators: the sound effects can be found here, and the music we’ll use is made by Komiku.

We will use Expo’s built-in Audio API to work with music. We’ll start by working in the Home/index.js to add the main menu theme.

First off, import the Audio API from Expo:

import { Audio } from 'expo';

Then import the music and start playing it in the componentWillMount():

async componentWillMount() {
  this.backgroundMusic = new Audio.Sound();
  try {
    await this.backgroundMusic.loadAsync(
      require("../../assets/music/Komiku_Mushrooms.mp3")
    );
    await this.backgroundMusic.setIsLoopingAsync(true);
    await this.backgroundMusic.playAsync();
    // Your sound is playing!
  } catch (error) {
    // An error occurred!
  }
}

This will load the music, set it to be a loop and start playing it asynchronously.

If an error happens, you can handle it in the catch section – maybe notify the user, console.log() it or call your crash analytics tool. You can read more about how the Audio API works in the background in the related Expo docs.

In the onPlayPress, simply add one line before the navigation:

this.backgroundMusic.stopAsync();

If you don’t stop the music when you route to another screen, the music will continue to play on the next screen, too.

Speaking of other screens, let’s add some background music to the Game screen too, with the same steps, but with the file ../../assets/music/Komiku_BattleOfPogs.mp3.

Spicing Things up with SFX

Along with the music, sound effects also play a vital part in making the game fun. We’ll have one sound effect on the main menu (button tap), and six on the game screen (button tap, tile tap – correct/wrong, pause in/out, lose).

Let’s start with the main menu SFX, and from there, you’ll be able to add the remaining to the game screen by yourself (I hope).

We only need a few lines of code to define a buttonFX object that is an instance of the Audio.Sound(), and load the sound file in the same try-catch block as the background music:

async componentWillMount() {
   this.backgroundMusic = new Audio.Sound();
   this.buttonFX = new Audio.Sound();
   try {
     await this.backgroundMusic.loadAsync(
       require("../../assets/music/Komiku_Mushrooms.mp3")
     );
     await this.buttonFX.loadAsync(
       require("../../assets/sfx/button.wav")
     );
    ...

You only need one line of code to play the sound effect. On the top of the onPlayPress event handler, add the following:

onPlayPress = () => {
   this.buttonFX.replayAsync();
   ...

Notice how I used replayAsync instead of playAsync – it’s because we may use this sound effect more than one time, and if you use playAsync and run it multiple times, it will only play the sound for the first time. It will come in handy later, and it’s also useful for continuing with the Game screen.

It’s as easy as one, two, three! Now, do the six sound effects on the game screen by yourself (a sketch for one of them follows the list):

  • Button tap
    • ../../assets/sfx/button.wav
    • Play it when pressing the Exit button
  • Tile tap – correct
    • ../../assets/sfx/tile_tap.wav
    • Play it in the onTilePress/good tile block
  • Tile tap – wrong
    • ../../assets/sfx/tile_wrong.wav
    • Play it in the onTilePress/wrong tile block
  • Pause – in
    • ../../assets/sfx/pause_in.wav
    • Play it in the onBottomBarPress/case "INGAME" block
  • Pause – out
    • ../../assets/sfx/pause_out.wav
    • Play it in the onBottomBarPress/case "PAUSED" block
  • Lose
    • ../../assets/sfx/lose.wav
    • Play it in the interval’s if (this.state.timeLeft <= 0) block
    • Also stop the background music with this.backgroundMusic.stopAsync();
    • Don’t forget to start playing the background music when starting the game again. You can do this by adding this.backgroundMusic.replayAsync(); to the onBottomBarPress/case "LOST" block.
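As a hedged sketch of how one of these could be wired up (the field name tileCorrectFX is just a suggestion, not from the original code), the correct-tile tap might look like this:

// In componentWillMount(), next to the other sounds inside the same try block:
this.tileCorrectFX = new Audio.Sound(); // hypothetical field name
await this.tileCorrectFX.loadAsync(require("../../assets/sfx/tile_tap.wav"));

// In onTilePress(), in the "good tile" branch:
this.tileCorrectFX.replayAsync();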

Our game is already pretty fun, but it still lacks the shaking animation when we’re touching the wrong tile – thus we are not getting any instant noticeable feedback.

A Primer to React-Native Animations (with example)

Animating is a vast topic, thus we can only cover the tip of the iceberg in this article. However, Apple has a really good WWDC video about designing with animations, and the Human Interface Guidelines is a good resource, too.

We could use a ton of animations in our app (e.g. animating the button size when the user taps it), but we’ll only cover one in this tutorial: The shaking of the grid when the player touches the wrong tile.

This React Native animation example will have several benefits: it’s some sort of punishment (it will take some time to finish), and as I mentioned already, it’s instant feedback when pressing the wrong tile, and it also looks cool.

There are several animation frameworks out there for React-Native, like react-native-animatable, but we’ll use the built-in Animated API for now. If you are not familiar with it yet, be sure to check the docs out.

Adding React-Native Animations to our Game

First, let’s initialize an animated value in the state that we can later use in the style of the grid container:

state = {
  ...
  shakeAnimation: new Animated.Value(0)
};

And for the <View> that contains the grid generator (with the shitton of ternary operators in it), just change <View> to <Animated.View>. (Don’t forget to change the closing tag, too!) Then in the inline style, add left: shakeAnimation so that it looks something like this:

<Animated.View
   style={{
     height: height / 2.5,
     width: height / 2.5,
     flexDirection: "row",
     left: shakeAnimation
   }}
>
   {gameState === "INGAME" ?
   ...

Now let’s save and reload the game. While playing, you shouldn’t notice any difference. If you do, you did something wrong – make sure that you followed every step exactly.

Now, go to the onTilePress() handler and at the // wrong tile section you can start animating the grid. In the docs, you’ll see that the basic recommended function to start animating with in React Native is Animated.timing().

You can animate one value to another value by using this method, however, to shake something, you will need multiple, connected animations playing after each other in a sequence. For example modifying it from 0 to 50, then -50, and then back to 0 will create a shake-like effect.

If you look at the docs again, you’ll see that Animated.sequence([]) does exactly this: it plays a sequence of animations after each other. You can pass in an endless number of animations (or Animated.timing()s) in an array, and when you run .start() on this sequence, the animations will start executing.

You can also ease animations with Easing. You can use back, bounce, ease, and elastic – to explore them, be sure to check the docs. However, we don’t need them yet, as they would really kill the performance now.

Our sequence will look like this:

Animated.sequence([
 Animated.timing(this.state.shakeAnimation, {
   toValue: 50,
   duration: 100
 }),
 Animated.timing(this.state.shakeAnimation, {
   toValue: -50,
   duration: 100
 }),
 Animated.timing(this.state.shakeAnimation, {
   toValue: 50,
   duration: 100
 }),
 Animated.timing(this.state.shakeAnimation, {
   toValue: -50,
   duration: 100
 }),
 Animated.timing(this.state.shakeAnimation, {
   toValue: 0,
   duration: 100
 })
]).start();

This will change the shakeAnimation in the state to 50, -50, 50, -50 and then 0. Therefore, we will shake the grid and then reset to its original position. If you save the file, reload the app and tap on the wrong tile, you’ll hear the sound effect playing and see the grid shaking.

Moving animations away from the JavaScript thread to the UI thread

Animations are an essential part of every fluid UI, and rendering them with performance efficiency in mind is something that every developer needs to strive for.

By default, the Animation API runs on the JavaScript thread, blocking other renders and code execution. This also means that if it gets blocked, the animation will skip frames. Because of this, we want to move animation drivers from the JS thread to the UI thread – and the good news is that this can be done with just one line of code with the help of native drivers.

To learn more about how the Animation API works in the background, what exactly are “animation drivers” and why exactly it is more efficient to use them, be sure to check out this blog post, but let’s move forward.

To use native drivers in our app, we only need to add one property to our animations: useNativeDriver: true. (Note that the native driver only supports non-layout style properties such as transform and opacity, so to actually drive our shake natively, the left-based style above would also need to be swapped for transform: [{ translateX: this.state.shakeAnimation }].)

Before:

Animated.timing(this.state.shakeAnimation, {
   toValue: 0,
   duration: 100
})

After:

Animated.timing(this.state.shakeAnimation, {
   toValue: 0,
   duration: 100,
   useNativeDriver: true
})

And boom, you’re done, great job there!

Now, let’s finish off with saving the high scores.

Persisting Data – Storing the High Scores

In React-Native, you get a simple, unencrypted, asynchronous, and persistent key-value storage system: AsyncStorage.

It’s recommended not to use AsyncStorage while aiming for production, but for a demo project like this, we can use it with ease. If you are aiming for production, be sure to check out other solutions like Realm or SQLite, though.

First off, we should create a new file under utils called storage.js or something like that. We will handle the two operations we need to do – storing and retrieving data – with the AsyncStorage API.

The API has two built-in methods: AsyncStorage.setItem() for storing, and AsyncStorage.getItem() for retrieving data. You can read more about how they work in the docs linked above. For now, the snippet below will fulfill our needs:

import { AsyncStorage } from "react-native";

export const storeData = async (key, value) => {
 try {
   await AsyncStorage.setItem(`@ColorBlinder:${key}`, String(value));
 } catch (error) {
   console.log(error);
 }
};

export const retrieveData = async key => {
 try {
   const value = await AsyncStorage.getItem(`@ColorBlinder:${key}`);
   if (value !== null) {
     return value;
   }
 } catch (error) {
   console.log(error);
 }
};

By adding this, we’ll have two async functions that can be used to store and retrieve persisted data with AsyncStorage. Let’s import our new methods and add two keys we’ll persist to the Game screen’s state:

import {
 generateRGB,
 mutateRGB,
 storeData,
 retrieveData
} from "../../utilities";
...
state = {
   points: 0,
   bestPoints: 0, // < new
   timeLeft: 15,
   bestTime: 0, // < new
   ...

And display these values in the bottom bar, next to their corresponding icons:

<View style={styles.bestContainer}>
 <Image
   source={require("../../assets/icons/trophy.png")}
   style={styles.bestIcon}
 />
 <Text style={styles.bestLabel}>{this.state.bestPoints}</Text>
</View>
. . .
<View style={styles.bestContainer}>
 <Image
   source={require("../../assets/icons/clock.png")}
   style={styles.bestIcon}
 />
 <Text style={styles.bestLabel}>{this.state.bestTime}</Text>
</View>

Now, let’s just save the best points first – we can worry about storing the best time later. In the timer, we have an if statement that checks whether we’ve lost already – and that’s also when we want to update the best points, so let’s check whether the current points are higher than our best yet, and if so, update it:

if (this.state.timeLeft <= 0) {
 this.loseFX.replayAsync();
 this.backgroundMusic.stopAsync();
 if (this.state.points > this.state.bestPoints) {
   this.setState(state => ({ bestPoints: state.points }));
   storeData('highScore', this.state.points);
 }
 this.setState({ gameState: "LOST" });
} else {
...

And when initializing the screen, in the async componentWillMount(), make sure to read in the initial high score and store it in the state so that we can display it later:

retrieveData('highScore').then(val => this.setState({ bestPoints: val || 0 }));

Now, you are storing and retrieving the high score on the game screen – but there’s a high score label on the home screen, too! You can retrieve the data with the same line as now and display it in the label by yourself.

We only need one last thing before we can take a break: storing the highest time that the player can achieve. To do so, you can use the same functions we already use to store the data (but with a different key!). However, we’ll need a slightly different technique to check whether we need to update the store:

this.interval = setInterval(async () => {
 if (this.state.gameState === "INGAME") {
   if (this.state.timeLeft > this.state.bestTime) {
     this.setState(state => ({ bestTime: state.timeLeft }));
     storeData('bestTime', this.state.timeLeft);
   }
. . .

This checks if our current timeLeft is bigger than the best that we achieved yet. At the top of the componentWillMount, don’t forget to retrieve and store the best time along with the high score, too:

retrieveData('highScore').then(val => this.setState({ bestPoints: val || 0 }));
retrieveData('bestTime').then(val => this.setState({ bestTime: val || 0 }));

Now everything’s set. The game is starting to look and feel nice, and the core features are already starting to work well – so from now on, we don’t need too much work to finish the project.

Next up in our React-Native Tutorial

In the next episode of this series, we will look into making our game responsive by testing on devices ranging from iPhone SE to Xs and last but not least, testing on Android. We will also look into improving the developer experience with ESLint and add testing with Jest.

Don’t worry if you still feel a bit overwhelmed, mobile development may be a huge challenge, even if you are already familiar with React – so don’t lose yourself right before the end. Give yourself a rest and check back later for the next episode!

If you want to check out the code that’s been finished as of now – check out the project’s GitHub repo.

In case you’re looking for outsourced development services, don’t hesitate to reach out to RisingStack.

Stripe & JS: Payments Integration Tutorial

In this Stripe & JS tutorial, I’ll show how you can create a simple webshop using Stripe Payments integration, React and Express. We’ll get familiar with the Stripe Dashboard and basic Stripe features such as charges, customers, orders, coupons and so on. Also, you will learn about the usage of webhooks and restricted API keys.

If you read this article, you’ll get familiar with Stripe integration in 15 minutes, so you can leapfrog the process of burying yourself in the official documentation (’cause we already did that for you!)

A little bit about my Stripe experience and the reasons for writing this tutorial: At RisingStack we’ve been working with a client from the US healthcare scene who hired us to create a large-scale webshop they can use to sell their products. During the creation of this Stripe based platform, we spent a lot of time studying the documentation and figuring out the integration. Not because it is hard, but there’s a certain amount of Stripe related knowledge that you’ll need to internalize.

We’ll build an example app in this tutorial together – so you can learn how to create a Stripe Webshop from the ground up! The example app’s frontend can be found at https://github.com/RisingStack/post-stripe , and its backend at https://github.com/RisingStack/post-stripe-api.

I’ll use code samples from these repos in the article below.

The Basics of Stripe Payments Integration

First of all, what is the promise of Stripe? It is basically a payment provider: you set up your account, integrate it into your application and let the money rain. Pretty simple right? Well, let your finance people decide if it is a good provider or not based on the plans they offer.

If you are here, you are probably more interested in the technicalities of the integration, so I’ll delve into that part. To show you how to use Stripe, we’ll build a simple demo application with it together.

Make it rain

Before we start coding, we need to create a Stripe account. Don’t worry, no credit card is required at this stage. You only need to provide a payment method when you attempt to activate your account.

Go straight to the Stripe Dashboard and hit that Sign up button. Email, name, password… the usual. BOOM! You have a dashboard. You can create, manage and keep track of orders, payment flow, customers… so basically everything you want to know regarding your shop is here.

If you want to create a new coupon or product, you only need to click a few buttons or enter a simple curl command into your terminal, as the Stripe API Doc describes. Of course, you can integrate Stripe into your product so your admins can set these up from your UI, and then integrate and expose it to your customers using Stripe.js.
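For instance, using the stripe Node library (which we’ll set up on the backend later in this article), creating a coupon could look roughly like the hedged sketch below – the key and the coupon values are placeholders:

const stripe = require('stripe')('sk_test_xxxxxxxxxxxxxxxxxxxxxx')

stripe.coupons.create({
  id: 'SPRING25',      // the code your customers will type in
  percent_off: 25,
  duration: 'once'
}).then(coupon => console.log(`Created coupon: ${coupon.id}`))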

Another important menu on the dashboard is the Developers section, where we will add our first webhook and create our restricted API keys. We will get more familiar with the dashboard and the API while we implement our demo shop below.

[Screenshot: the Stripe Payments dashboard]

Creating a Webshop in React with Charges

Let’s create a React webshop with two products: a Banana and Cucumber. What else would you want to buy in a webshop anyways, right?

  • We can use Create React App to get started.
  • We’re going to use Axios for HTTP requests
  • and query-string-object to convert objects to query strings for Stripe requests.
  • We will also need React Stripe Elements, which is a React wrapper for Stripe.js and Stripe Elements. It adds secure credit card inputs and sends the card’s data for tokenization to the Stripe API.

Take my advice: You should never send raw credit card details to your own API, but let Stripe handle the credit card security for you.

You will be able to identify the card provided by the user using the token you got from Stripe.

npx create-react-app webshop
cd webshop
npm install --save react-stripe-elements
npm install --save axios
npm install --save query-string-object

After we’re done with the preparations, we have to include Stripe.js in our application. Just add <script src="https://js.stripe.com/v3/"></script> to the head of your index.html.

Now we are ready to start coding.

First, we have to add a <StripeProvider/> from react-stripe-elements to our root React App component.

[Screenshot: the API keys section of the Stripe dashboard]

This will give us access to the Stripe object. In the props, we should pass a public access key (apiKey) which is found in the dashboard’s Developers section under the API keys menu as Publishable key.

// App.js
import React from 'react'
import {StripeProvider, Elements} from 'react-stripe-elements'
import Shop from './Shop'

const App = () => {
  return (
    <StripeProvider apiKey="pk_test_xxxxxxxxxxxxxxxxxxxxxxxx">
      <Elements>
        <Shop/>
      </Elements>
    </StripeProvider>
  )
}

export default App

The <Shop/> is the Stripe implementation of our shop form as you can see from import Shop from './Shop'. We’ll go into the details later.

As you can see the <Shop/> is wrapped in <Elements> imported from react-stripe-elements so that you can use injectStripe in your components. To shed some light on this, let’s take a look at our implementation in Shop.js.

// Shop.js
import React, { Component } from 'react'
import { CardElement } from 'react-stripe-elements'
import PropTypes from 'prop-types'
import axios from 'axios'
import qs from 'query-string-object'

const prices = {
  banana: 150,
  cucumber: 100
}

class Shop extends Component {
  constructor(props) {
    super(props)
    this.state = {
      fetching: false,
      cart: {
        banana: 0,
        cucumber: 0
      }
    }
    this.handleCartChange = this.handleCartChange.bind(this)
    this.handleCartReset = this.handleCartReset.bind(this)
    this.handleSubmit = this.handleSubmit.bind(this)
  }

  handleCartChange(evt) {
    evt.preventDefault()
    const cart = this.state.cart
    cart[evt.target.name]+= parseInt(evt.target.value)
    this.setState({cart})
  }

  handleCartReset(evt) {
    evt.preventDefault()
    this.setState({cart:{banana: 0, cucumber: 0}})
  }

  handleSubmit(evt) {
    // TODO
  }

  render () {
    const cart = this.state.cart
    const fetching = this.state.fetching
    return (
      <form onSubmit={this.handleSubmit} style={{width: '550px', margin: '20px', padding: '10px', border: '2px solid lightseagreen', borderRadius: '10px'}}>
        <div>
          Banana {(prices.banana / 100).toLocaleString('en-US', {style: 'currency', currency: 'usd'})}:
          <div>
            <button name="banana" value={1} onClick={this.handleCartChange}>+</button>
            <button name="banana" value={-1} onClick={this.handleCartChange} disabled={cart.banana <= 0}>-</button>
            {cart.banana}
          </div>
        </div>
        <div>
          Cucumber {(prices.cucumber / 100).toLocaleString('en-US', {style: 'currency', currency: 'usd'})}:
          <div>
            <button name="cucumber" value={1} onClick={this.handleCartChange}>+</button>
            <button name="cucumber" value={-1} onClick={this.handleCartChange} disabled={cart.cucumber <= 0}>-</button>
            {cart.cucumber}
          </div>
        </div>
        <button onClick={this.handleCartReset}>Reset Cart</button>
        <div style={{width: '450px', margin: '10px', padding: '5px', border: '2px solid green', borderRadius: '10px'}}>
          <CardElement style={{base: {fontSize: '18px'}}}/>
        </div>
        {!fetching
          ? <button type="submit" disabled={cart.banana === 0 && cart.cucumber === 0}>Purchase</button>
          : 'Purchasing...'
        }
        Price:{((cart.banana * prices.banana + cart.cucumber * prices.cucumber) / 100).toLocaleString('en-US', {style: 'currency', currency: 'usd'})}
      </form>
    )
  }
}

Shop.propTypes = {
  stripe: PropTypes.shape({
    createToken: PropTypes.func.isRequired
  }).isRequired
}

If you take a look at it, the Shop is a simple React form with purchasable elements: Banana and Cucumber, and with a quantity increase/decrease button for each. Clicking the buttons will change their respective amount in this.state.cart.

There is a submit button below, and the current total price of the cart is printed at the very bottom of the form. Price will expect the prices in cents, so we store them as cents, but of course, we want to present them to the user in dollars. We prefer them to be shown to the second decimal place, e.g. $2.50 instead of $2.5. To achieve this, we can use the built-in toLocaleString() function to format the prices.

Now comes the Stripe specific part: we need to add a form element so users can enter their card details. To achieve this, we only need to add <CardElement/> from react-stripe-elements and that’s it. I’ve also added a bit of low effort inline css to make this shop at least somewhat pleasing to the eye.

We also need to use the injectStripe Higher-Order-Component in order to pass the Stripe object as a prop to the <Shop/> component, so we can call Stripe’s createToken() function in handleSubmit to tokenize the user’s card, so they can be charged.

// Shop.js
import { injectStripe } from 'react-stripe-elements'
export default injectStripe(Shop)

Once we receive the tokenized card from Stripe, we are ready to charge it.

For now let’s just keep it simple and charge the card by sending a POST request to https://api.stripe.com/v1/charges, specifying the payment source (the token id), the amount of the charge, and the currency, as described in the Stripe API.

We need to send the API key in the header for authorization. We can create a restricted API key on the dashboard in the Developers menu. Set the permission for charges to “Read and write” as shown in the screenshot below.

Do not forget: You should never use your swiss army Secret key on the client!

[Screenshot: creating a restricted API key on the Stripe dashboard]

Let’s take a look at it in action.

// Shop.js
// ...
const stripeAuthHeader = {
  'Content-Type': 'application/x-www-form-urlencoded',
  'Authorization': `Bearer rk_test_xxxxxxxxxxxxxxxxxxxxxxxx`
}

class Shop extends Component {
  // ...
  handleSubmit(evt) {
    evt.preventDefault()
    this.setState({fetching: true})
    const cart = this.state.cart
    
    this.props.stripe.createToken().then(({token}) => {
        const price = cart.banana * prices.banana + cart.cucumber * prices.cucumber
        axios.post(`https://api.stripe.com/v1/charges`, 
        qs.stringify({
          source: token.id,
          amount: price,
          currency: 'usd'
        }),
        { headers: stripeAuthHeader })
        .then((resp) => {
          this.setState({fetching: false})
          alert(`Thank you for your purchase! Your card has been charged with: ${(resp.data.amount / 100).toLocaleString('en-US', {style: 'currency', currency: 'usd'})}`)
        })
        .catch(error => {
          this.setState({fetching: false})
          console.log(error)
        })
    }).catch(error => {
      this.setState({fetching: false})
      console.log(error)
    })
  }
  // ...
}

For testing purposes you can use these international cards provided by Stripe.

Looks good, we can already create tokens from cards and charge them, but how should we know who bought what and where should we send the package?

That’s where products and orders come in.

Placing an order with Stripe

Implementing a simple charging method is a good start, but we will need to take it a step further to create orders. To do so, we have to set up a server and expose an API which handles those orders and accepts webhooks from Stripe to process them once they get paid.

We will use express to handle the routes of our API. You can find a list below of a couple of other node packages to get started. Let’s create a new root folder and get started.

npm install express stripe body-parser cors helmet 

The skeleton is a simple express Hello World app, using CORS so that the browser won’t panic when we try to reach our API server (which resides on a different port), and Helmet to set a bunch of security headers automatically for us.

// index.js
const express = require('express')
const helmet = require('helmet')
const cors = require('cors')
const app = express()
const port = 3001

app.use(helmet())

app.use(cors({
  origin: [/http:\/\/localhost:\d+$/],
  allowedHeaders: ['Content-Type', 'Authorization'],
  credentials: true
}))

app.get('/api/', (req, res) => res.send({ version: '1.0' }))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

In order to access Stripe, require the stripe module and call it straight away with your Secret Key (you can find it in dashboard->Developers->API keys). We will use stripe.orders.create() to pass along the data we receive when the client calls our server to place an order.

The orders will not be paid automatically. To charge the customer we can either use a Source directly such as a Card Token ID or we can create a Stripe Customer.

The added benefit of creating a Stripe customer is that we can track multiple charges, or create recurring charges for them and also instruct Stripe to store the shipping data and other necessary information to fulfill the order.

You probably want to create Customers from Card Tokens and shipping data even when your application already handles users. This way you can attach permanent or seasonal discount to those Customers, allow them to shop any time with a single click and list their orders on your UI.
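As a hedged sketch (not part of the demo code – the values and the surrounding async function are assumed), creating such a Customer from a card token on the backend could look roughly like this:

const customer = await stripe.customers.create({
  source: token.id, // the card token received from Stripe.js
  email: 'jane@example.com',
  shipping: {
    name: 'Jane Doe',
    address: {
      line1: '510 Townsend St',
      city: 'San Francisco',
      state: 'CA',
      country: 'US',
      postal_code: '94103'
    }
  }
})
// Later orders and charges can then reference customer.id instead of a one-off source.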

For now let’s keep it simple anyway and use the Card Token as our Source calling stripe.orders.pay() once the order is successfully created.

In a real-world scenario, you probably want to separate the order creation from payment by exposing them on different endpoints, so if the payment fails the Client can try again later without having to recreate the order. However, we still have a lot to cover, so let’s not overcomplicate things.

// index.js
const stripe = require('stripe')('sk_test_xxxxxxxxxxxxxxxxxxxxxx')

app.post('/api/shop/order', async (req, res) => {
  const order = req.body.order
  const source = req.body.source
  try {
    const stripeOrder = await stripe.orders.create(order)
    console.log(`Order created: ${stripeOrder.id}`)
    await stripe.orders.pay(stripeOrder.id, {source})
  } catch (err) {
    // Handle stripe errors here: No such coupon, sku, etc.
    console.log(`Order error: ${err}`)
    return res.sendStatus(404)
  }
  return res.sendStatus(200)
})

Now we’re able to handle orders on the backend, but we also need to implement this on the UI.

First, let’s implement the state of the <Shop/> as an object the Stripe API expects.

You can find out what an order request should look like here. We’ll need an address object with line1, city, state, country, postal_code fields, a name, an email and a coupon field, to get our customers ready for coupon hunting.

// Shop.js
class Shop extends Component {
  constructor(props) {
    super(props)
    this.state = {
      fetching: false,
      cart: {
        banana: 0,
        cucumber: 0
      },
      coupon: '',
      email: '',
      name: '',
      address : {
        line1: '',
        city: '',
        state: '',
        country: '',
        postal_code: ''
      }
    }
    this.handleCartChange = this.handleCartChange.bind(this)
    this.handleCartReset = this.handleCartReset.bind(this)
    this.handleAddressChange = this.handleAddressChange.bind(this)
    this.handleChange = this.handleChange.bind(this)
    this.handleSubmit = this.handleSubmit.bind(this)
  }

  handleChange(evt) {
    evt.preventDefault()
    this.setState({[evt.target.name]: evt.target.value})
  }

  handleAddressChange(evt) {
    evt.preventDefault()
    const address = this.state.address
    address[evt.target.name] = evt.target.value
    this.setState({address})
  }
  // ...
}

Now we are ready to create the input fields. We should, of course, disable the submit button when the input fields are empty. Just the usual deal.

// Shop.js
render () {
  const state = this.state
  const fetching = state.fetching
  const cart = state.cart
  const address = state.address
  const submittable = (cart.banana !== 0 || cart.cucumber !== 0) && state.email && state.name && address.line1 && address.city && address.state && address.country && address.postal_code
  return (
// ...
    <div>Name: <input type="text" name="name" onChange={this.handleChange}/></div>
    <div>Email: <input type="text" name="email" onChange={this.handleChange}/></div>
    <div>Address Line: <input type="text" name="line1" onChange={this.handleAddressChange}/></div>
    <div>City: <input type="text" name="city" onChange={this.handleAddressChange}/></div>
    <div>State: <input type="text" name="state" onChange={this.handleAddressChange}/></div>
    <div>Country: <input type="text" name="country" onChange={this.handleAddressChange}/></div>
    <div>Postal Code: <input type="text" name="postal_code" onChange={this.handleAddressChange}/></div>
    <div>Coupon Code: <input type="text" name="coupon" onChange={this.handleChange}/></div>
    {!fetching
      ? <button type="submit" disabled={!submittable}>Purchase</button>
      : 'Purchasing...'}
// ...

We also have to define purchasable items.

These items will be identified by a Stock Keeping Unit by Stripe, which can be created on the dashboard as well.

First, we have to create the Products (Banana and Cucumber under dashboard->Orders->Products) and then assign an SKU to each of them (click on the created product and Add SKU in the Inventory group). An SKU specifies a concrete variant of a product, including its properties – size, color, quantity, and price – so a product can have multiple SKUs.

stripe-payments-dashboard-banana-product-creation

Stripe-Payments-Dashboard-Stock-Keeping-Unit

After we have created our products and assigned SKUs to them, we add them to the webshop so we can assemble the order.

// Shop.js
const skus = {
  banana: 1,
  cucumber: 2
}

We are ready to send orders to our express API on submit. We do not have to calculate the total price of orders from now on. Stripe can sum it up for us, based on the SKUs, quantities, and coupons.

// Shop.js
handleSubmit(evt) {
  evt.preventDefault()
  this.setState({fetching: true})
  const state = this.state
  const cart = state.cart
  
  this.props.stripe.createToken({name: state.name}).then(({token}) => {
    // Create order
    const order = {
      currency: 'usd',
      items: Object.keys(cart).filter((name) => cart[name] > 0).map(name => {
        return {
          type: 'sku',
          parent: skus[name],
          quantity: cart[name]
        }
      }),
      email: state.email,
      shipping: {
        name: state.name,
        address: state.address
      }
    }
    // Add coupon if given
    if (state.coupon) {
      order.coupon = state.coupon
    }
    // Send order
    axios.post(`http://localhost:3001/api/shop/order`, {order, source: token.id})
    .then(() => {
      this.setState({fetching: false})
      alert(`Thank you for your purchase!`)
    })
    .catch(error => {
      this.setState({fetching: false})
      console.log(error)
    })
  }).catch(error => {
    this.setState({fetching: false})
    console.log(error)
  })
}

Let’s create a coupon for testing purposes. This can be done on the dashboard as well. You can find this option under the Billing menu on the Coupons tab.

There are multiple types of coupons based on their duration, but only coupons with the type Once can be used for orders. The rest of the coupons can be attached to Stripe Customers.

You can also specify a lot of parameters for the coupon you create, such as how many times it can be used, whether it is amount-based or percentage-based, and when the coupon will expire. For now, we need a coupon that can be used only once and reduces the price by a certain amount.

Stripe-Payments-Dashboard-Coupon-Creation

Great! Now we have our products, we can create orders, and we can also ask Stripe to charge the customer’s card for us. But we are still not ready to ship the products as we have no idea at the moment whether the charge was successful. To get that information, we need to set up webhooks, so Stripe can let us know when the money is on its way.

Stripe-Payments-Shop-Orders

Setting up Stripe Webhooks to Verify Payments

As we discussed earlier, we are not assigning cards but Sources to Customers. The reason behind that is that Stripe is capable of handling several payment methods, some of which may take days to be verified.

We need to set up an endpoint Stripe can call when an event — such as a successful payment — has happened. Webhooks are also useful when an event is not initiated by us via calling the API, but comes straight from Stripe.

Imagine that you have a subscription service, and you don’t want to charge the customer every month. In this case, you can set up a webhook, and you will get notified when the recurring payment was successful or if it failed.

In this example, we only want to be notified when an order gets paid. When that happens, Stripe can notify us by calling an endpoint on our API with an HTTP request containing the payment data in the request body. At the moment we don’t have a static IP, so we need a way to expose our local API to the public internet. We can use Ngrok for that: just download it and run the ./ngrok http 3001 command to get an ngrok URL pointing to our localhost:3001.

We also have to set up our webhook on the Stripe dashboard. Go to Developers -> Webhooks, click on Add endpoint and type in your ngrok url followed by the endpoint to be called e.g. http://92832de0.ngrok.io/api/shop/order/process. Then under Filter event select Select types to send and search for order.payment_succeeded.

Stripe-Dashboard-Webhook-Creation

The payload sent in the request body is signed: Stripe includes a signature in the headers, which, together with the webhook secret found on the webhooks dashboard, lets us verify that the event really comes from Stripe.

This also means we cannot simply let bodyParser swallow the body: the verification needs the raw request body, so we save it whenever the URL starts with /api/shop/order/process. We then use the stripe.webhooks.constructEvent() function provided by the Stripe JavaScript SDK to verify the signature and parse the event for us.

// index.js
const bodyParser = require('body-parser')

app.use(bodyParser.json({
  verify: (req, res, buf) => {
    if (req.originalUrl.startsWith('/api/shop/order/process')) {
      req.rawBody = buf.toString()
    }
  }
}))

app.use(bodyParser.urlencoded({
  extended: false
}))

app.post('/api/shop/order/process', async (req, res) => {
  const sig = req.headers['stripe-signature']
  try {
    const event = await stripe.webhooks.constructEvent(req.rawBody, sig, 'whsec_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
    console.log(`Processing Order : ${event.data.object.id}`)
    // Process the paid order here
  } catch (err) {
    return res.sendStatus(500)
  }
  return res.sendStatus(200)
})

After an order has been successfully paid, we can send it to other APIs like Salesforce or Stamps to pack things up and get them ready to ship.

Wrapping up our Stripe JS tutorial

My goal with this guide was to walk you through the process of creating a webshop using JavaScript & Stripe. I hope you learned from our experience and will use this guide when you decide to implement a similar system in the future.

In case you need help with Stripe development, you’d like to learn more about how to use the Stripe API, or you’re just looking for Node & React development in general, feel free to reach out to us at info@risingstack.com or via our Node.js development website.

How to Deploy a Ceph Storage to Bare Virtual Machines

Ceph is a freely available storage platform that implements object storage on a single distributed computer cluster and provides interfaces for object-, block- and file-level storage. Ceph aims primarily for completely distributed operation without a single point of failure. Ceph storage manages data replication and is generally quite fault-tolerant. As a result of its design, the system is both self-healing and self-managing.

Ceph has loads of benefits and great features, but the main drawback is that you have to host and manage it yourself. In this post, we’ll check two different approaches of virtual machine deployment with Ceph.

Anatomy of a Ceph cluster

Before we dive into the actual deployment process, let’s see what we’ll need to fire up for our own Ceph cluster.

There are three services that form the backbone of the cluster:

  • ceph monitors (ceph-mon) maintain maps of the cluster state and are also responsible for managing authentication between daemons and clients
  • managers (ceph-mgr) are responsible for keeping track of runtime metrics and the current state of the Ceph cluster
  • object storage daemons (ceph-osd) store data, handle data replication, recovery, rebalancing, and provide some ceph monitoring information.

Additionally, we can add further parts to the cluster to support different storage solutions:

  • metadata servers (ceph-mds) store metadata on behalf of the Ceph Filesystem
  • rados gateway (ceph-rgw) is an HTTP server for interacting with a Ceph Storage Cluster that provides interfaces compatible with OpenStack Swift and Amazon S3.

There are multiple ways of deploying these services. We’ll check two of them:

  • first, using the ceph-deploy tool,
  • then a docker-swarm based vm deployment.

Let’s kick it off!

Ceph Setup

Okay, a disclaimer first. As this is not a production infrastructure, we’ll cut a couple of corners.

You should not run multiple different Ceph daemons on the same host, but for the sake of simplicity, we’ll only use 3 virtual machines for the whole cluster.

In the case of OSDs, you can run multiple of them on the same host, but using the same storage drive for multiple instances is a bad idea as the disk’s I/O speed might limit the OSD daemons’ performance.

For this tutorial, I’ve created 4 EC2 machines in AWS: 3 for Ceph itself and 1 admin node. For ceph-deploy to work, the admin node requires passwordless SSH access to the nodes and that SSH user has to have passwordless sudo privileges.

In my case, as all machines are in the same subnet on AWS, connectivity between them is not an issue. However, in other cases editing the hosts file might be necessary to ensure proper connection.

Depending on where you deploy Ceph, security groups, firewall settings, or other resources have to be adjusted to open these ports:

  • 22 for SSH
  • 6789 for monitors
  • 6800:7300 for OSDs, managers and metadata servers
  • 8080 for dashboard
  • 7480 for rados gateway

Without further ado, let’s start deployment.

Ceph Storage Deployment

Install prerequisites on all machines

$ sudo apt update
$ sudo apt -y install ntp python

For Ceph to work seamlessly, we have to make sure the system clocks are not skewed. The suggested solution is to install ntp on all machines and it will take care of the problem. While we’re at it, let’s install python on all hosts as ceph-deploy depends on it being available on the target machines.

Prepare the admin node

$ ssh -i ~/.ssh/id_rsa -A ubuntu@13.53.36.123

As all the machines have my public key in their authorized_keys files thanks to AWS, I can use SSH agent forwarding to access the Ceph machines from the admin node. The -i flag points at the key to be used, and the -A flag takes care of forwarding my key.

$ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
$ echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt update
$ sudo apt -y install ceph-deploy

We’ll use the latest nautilus release in this example. If you want to deploy a different version, just change the debian-nautilus part to your desired release (luminous, mimic, etc.).

$ echo "StrictHostKeyChecking no" | sudo tee -a /etc/ssh/ssh_config > /dev/null

OR

$ ssh-keyscan -H 10.0.0.124,10.0.0.216,10.0.0.104 >> ~/.ssh/known_hosts

Ceph-deploy uses SSH connections to manage the nodes we provide. Each time you SSH to a machine that is not in the list of known_hosts (~/.ssh/known_hosts), you’ll get prompted whether you want to continue connecting or not. This interruption does not mesh well with the deployment process, so we either have to use ssh-keyscan to grab the fingerprint of all the target machines or disable the strict host key checking outright.

10.0.0.124 ip-10-0-0-124.eu-north-1.compute.internal ip-10-0-0-124
10.0.0.216 ip-10-0-0-216.eu-north-1.compute.internal ip-10-0-0-216
10.0.0.104 ip-10-0-0-104.eu-north-1.compute.internal ip-10-0-0-104

Even though the target machines are in the same subnet as our admin and they can access each other, we have to add them to the hosts file (/etc/hosts) for ceph-deploy to work properly. Ceph-deploy creates monitors by the provided hostname, so make sure it matches the actual hostname of the machines otherwise the monitors won’t be able to join the quorum and the deployment fails. Don’t forget to reboot the admin node for the changes to take effect.

$ mkdir ceph-deploy
$ cd ceph-deploy

As a final step of the preparation, let’s create a dedicated folder as ceph-deploy will create multiple config and key files during the process.

Deploy resources

$ ceph-deploy new ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

The command ceph-deploy new creates the necessary files for the deployment. Pass it the hostnames of the monitor nodes, and it will create ceph.conf and ceph.mon.keyring along with a log file.

The generated ceph.conf should look something like this:

[global]
fsid = 0572e283-306a-49df-a134-4409ac3f11da
mon_initial_members = ip-10-0-0-124, ip-10-0-0-216, ip-10-0-0-104
mon_host = 10.0.0.124,10.0.0.216,10.0.0.104
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

It has a unique ID called fsid, the monitor hostnames and addresses, and the authentication modes. Ceph provides two authentication modes: none (anyone can access data without authentication) and cephx (key-based authentication).

The other file, the monitor keyring, is another important piece of the puzzle, as all monitors must have identical keyrings in a cluster with multiple monitors. Luckily, ceph-deploy takes care of propagating the key file during the deployment.

$ ceph-deploy install --release nautilus ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

As you might have noticed so far, we haven’t installed ceph on the target nodes yet. We could do that one-by-one, but a more convenient way is to let ceph-deploy take care of the task. Don’t forget to specify the release of your choice, otherwise you might run into a mismatch between your admin and targets.

$ ceph-deploy mon create-initial

Finally, the first piece of the cluster is up and running! create-initial will deploy the monitors specified in ceph.conf we generated previously and also gather various key files. The command will only complete successfully if all the monitors are up and in the quorum.

$ ceph-deploy admin ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

Executing ceph-deploy admin will push a Ceph configuration file and the ceph.client.admin.keyring to the /etc/ceph directory of the nodes, so we can use the ceph CLI without having to provide the ceph.client.admin.keyring each time to execute a command.

At this point, we can take a peek at our cluster. Let’s SSH into a target machine (we can do it directly from the admin node thanks to agent forwarding) and run sudo ceph status.

$ sudo ceph status
  cluster:
    id:     0572e283-306a-49df-a134-4409ac3f11da
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ip-10-0-0-104,ip-10-0-0-124,ip-10-0-0-216 (age 110m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:

Here we get a quick overview of what we have so far. Our cluster seems to be healthy and all three monitors are listed under services. Let’s go back to the admin and continue adding pieces.

$ ceph-deploy mgr create ip-10-0-0-124

For luminous+ builds, a manager daemon is required. It’s responsible for monitoring the state of the cluster and also manages modules/plugins.

Okay, now we have all the management in place, let’s add some storage to the cluster to make it actually useful, shall we?

First, we have to find out (on each target machine) the label of the drive we want to use. To fetch the list of available disks on a specific node, run

$ ceph-deploy disk list ip-10-0-0-104

Here’s a sample output:

ceph storage deploy sample output

$ ceph-deploy osd create --data /dev/nvme1n1 ip-10-0-0-124
$ ceph-deploy osd create --data /dev/nvme1n1 ip-10-0-0-216
$ ceph-deploy osd create --data /dev/nvme1n1 ip-10-0-0-104

In my case the label was nvme1n1 on all 3 machines (courtesy of AWS), so to add OSDs to the cluster I just ran the 3 commands above.

At this point, our cluster is basically ready. We can run ceph status to see that our monitors, managers and OSDs are up and running. But nobody wants to SSH into a machine every time to check the status of the cluster. Luckily, there’s a pretty neat dashboard that comes with Ceph; we just have to enable it.

…Or at least that’s what I thought. The dashboard was introduced in the luminous release and was further improved in mimic. However, we’re currently deploying nautilus, the latest version of Ceph. After trying the usual way of enabling the dashboard via a manager

$ sudo ceph mgr module enable dashboard

we get an error message saying Error ENOENT: all mgr daemons do not support module 'dashboard', pass --force to force enablement.

Turns out, in nautilus the dashboard package is no longer installed by default. We can check the available modules by running

$ sudo ceph mgr module ls

and as expected, dashboard is not there; it comes in the form of a separate package. So we have to install it first, and luckily it’s pretty easy.

$ sudo apt install -y ceph-mgr-dashboard

Now we can enable it, right? Not so fast. There’s a dependency that has to be installed on all manager hosts, otherwise we get a slightly cryptic error message saying Error EIO: Module 'dashboard' has experienced an error and cannot handle commands: No module named routes.

$ sudo apt install -y python-routes

We’re all set to enable the dashboard module now. As it’s a public-facing page that requires login, we should set up a cert for SSL. For the sake of simplicity, I’ve just disabled the SSL feature. You should never do this in production, check out the official docs to see how to set up a cert properly. Also, we’ll need to create an admin user so we can log in to our dashboard.

$ sudo ceph mgr module enable dashboard
$ sudo ceph config set mgr mgr/dashboard/ssl false
$ sudo ceph dashboard ac-user-create admin secret administrator

By default, the dashboard is available on the host running the manager on port 8080. After logging in, we get an overview of the cluster status, and under the cluster menu, we get really detailed overviews of each running daemon.

ceph storage deployment dashboard
ceph cluster dashboard

If we try to navigate to the Filesystems or Object Gateway tabs, we get a notification that we haven’t configured the required resources to access these features. Our cluster can only be used as block storage right now. We have to deploy a couple of extra things to extend its usability.

Quick detour: In case you’re looking for a company that can help you with Ceph, or DevOps in general, feel free to reach out to us at RisingStack!

Using the Ceph filesystem

Going back to our admin node, running

$ ceph-deploy mds create ip-10-0-0-124 ip-10-0-0-216 ip-10-0-0-104

will create the metadata servers, which will be inactive for now, as we haven’t enabled the feature yet. First, we need to create two RADOS pools, one for the actual data and one for the metadata.

$ sudo ceph osd pool create cephfs_data 8
$ sudo ceph osd pool create cephfs_metadata 8

There are a couple of things to consider when creating pools that we won’t cover here. Please consult the documentation for further details.

After creating the required pools, we’re ready to enable the filesystem feature

$ sudo ceph fs new cephfs cephfs_metadata cephfs_data

The MDS daemons will now be able to enter an active state, and we are ready to mount the filesystem. We have two options to do that, via the kernel driver or as FUSE with ceph-fuse.

Before we continue with the mounting, let’s create a user keyring that we can use in both solutions for authorization and authentication, as we have cephx enabled. There are multiple restrictions that can be set when creating a new key, as specified in the docs. For example:

$ sudo ceph auth get-or-create client.user mon 'allow r' mds 'allow r, allow rw path=/home/cephfs' osd 'allow rw pool=cephfs_data' -o /etc/ceph/ceph.client.user.keyring

will create a new client key with the name user and output it into ceph.client.user.keyring. It will provide write access for the MDS only to the /home/cephfs directory, and the client will only have write access within the cephfs_data pool.

Mounting with the kernel

Now let’s create a dedicated directory and then use the key from the previously generated keyring to mount the filesystem with the kernel.

$ sudo mkdir /mnt/mycephfs
$ sudo mount -t ceph 13.53.114.94:6789:/ /mnt/mycephfs -o name=user,secret=AQBxnDFdS5atIxAAV0rL9klnSxwy6EFpR/EFbg==

Attaching with FUSE

Mounting the filesystem with FUSE is not much different either. It requires installing the ceph-fuse package.

$ sudo apt install -y ceph-fuse

Before we run the command, we have to retrieve the ceph.conf and ceph.client.user.keyring files from the Ceph host and put them in /etc/ceph. The easiest solution is to use scp.

$ sudo scp ubuntu@13.53.114.94:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
$ sudo scp ubuntu@13.53.114.94:/etc/ceph/ceph.client.user.keyring /etc/ceph/ceph.keyring

Now we are ready to mount the filesystem.

$ sudo mkdir cephfs
$ sudo ceph-fuse -m 13.53.114.94:6789 cephfs

Using the RADOS gateway

To enable the S3 management feature of the cluster, we have to add one final piece, the rados gateway.

$ ceph-deploy rgw create ip-10-0-0-124

For the dashboard, it’s required to create a radosgw-admin user with the system flag to enable the Object Storage management interface. We also have to provide the user’s access_key and secret_key to the dashboard before we can start using it.

$ sudo radosgw-admin user create --uid=rg_wadmin --display-name=rgw_admin --system
$ sudo ceph dashboard set-rgw-api-access-key <access_key>
$ sudo ceph dashboard set-rgw-api-secret-key <secret_key>

Using the Ceph Object Storage is really easy, as RGW provides an interface identical to S3. You can use your existing S3 requests and code without any modifications; you just have to change the connection string, access key, and secret key.
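
As a quick illustration, here is a hedged sketch of how an existing Node.js S3 client (the aws-sdk v2 package in this case) could be pointed at the gateway; the endpoint address, credentials, and the bucket name are placeholders.

// s3-test.js – illustrative sketch only
const AWS = require('aws-sdk')

const s3 = new AWS.S3({
  endpoint: 'http://13.53.114.94:7480',  // the host running the RGW, on its default port
  accessKeyId: '<access_key>',           // from the radosgw-admin user we just created
  secretAccessKey: '<secret_key>',
  s3ForcePathStyle: true,                // RGW buckets are usually addressed by path
  signatureVersion: 'v4'
})

s3.createBucket({ Bucket: 'demo' }, (err) => {
  if (err) return console.error(err)
  s3.listBuckets((err, data) => console.log(err || data.Buckets))
})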

Ceph Storage Monitoring

The dashboard we’ve deployed shows a lot of useful information about our cluster, but monitoring is not its strongest suit. Luckily Ceph comes with a Prometheus module. After enabling it by running:

$ sudo ceph mgr module enable prometheus

a wide variety of metrics becomes available on the given host on port 9283 by default. To make use of this exposed data, we’ll have to set up a Prometheus instance.

I strongly suggest running the following containers on a separate machine from your Ceph cluster. In case you are just experimenting (like me) and don’t want to use a lot of VMs, make sure you have enough memory and CPU left on your virtual machine before firing up docker, as it can lead to strange behaviour and crashes if it runs out of resources.

There are multiple ways of firing up Prometheus, probably the most convenient is with docker. After installing docker on your machine, create a prometheus.yml file to provide the endpoint where it can access our Ceph metrics.

# /etc/prometheus.yml

scrape_configs:
  - job_name: 'ceph'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['13.53.114.94:9283']

Then launch the container itself by running:

$ sudo docker run -p 9090:9090 -v /etc/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus

Prometheus will start scraping our data, and it will show up on its dashboard. We can access it on port 9090 on the host machine. The Prometheus dashboard is great for ad-hoc queries, but it does not provide very eye-pleasing visualizations. That’s the main reason why it’s usually paired with Grafana, which provides awesome visualizations for the data collected by Prometheus. It can be launched with docker as well.

$ sudo docker run -d -p 3000:3000 grafana/grafana

Grafana is fantastic when it comes to visualizations, but setting up dashboards can be a daunting task. To make our lives easier, we can load one of the pre-prepared dashboards, for example this one.

ceph storage grafana monitoring

Ceph Deployment: Lessons Learned & Next Up

Ceph can be a great alternative to AWS S3 or other object storages when running your service in the public cloud is simply not an option and you have to operate in the private cloud. The fact that it provides an S3-compatible interface makes it a lot easier to port other tools that were written with a “cloud first” mentality. It also plays nicely with Prometheus, so you don’t need to worry about setting up proper monitoring for it, though you can also swap it for a simpler, more battle-hardened solution such as Nagios.

In this article, we deployed Ceph to bare virtual machines, but you might need to integrate it into your Kubernetes or Docker Swarm cluster. While it is perfectly fine to install it on VMs next to your container orchestration tool, you might want to leverage the services they provide when you deploy your Ceph cluster. If that is your use case, stay tuned for our next post covering Ceph, where we’ll take a look at the black magic required to use Ceph on Docker Swarm and Kubernetes.

In the next Ceph tutorial, which we’ll release next week, we’re going to take a look at valid Ceph storage alternatives with Docker and with Kubernetes.

PS: Feel free to reach out to us at RisingStack in case you need help with Ceph or Ops in general!

Getting Started with Ansible Tutorial – Automate your Infrastructure

This Ansible tutorial teaches the basics of our favorite open-source software provisioning, configuration management, and application-deployment tool.

First, we’ll discuss the Infrastructure as Code concept, and we’ll also take a thorough look at the currently available IaC tool landscape. Then, we’ll dive deep into what Ansible is, how it works, and what the best practices are for its installation and configuration.

You’ll also learn how to automate your infrastructure with Ansible in an easy way through a Raspberry Pi fleet management example.

Okay, let’s start with understanding the IaC Concept!

What is Infrastructure as Code?

Since the dawn of complex Linux server architectures, the way of configuring servers has been either using the command line or bash scripts. However, the problem with bash scripts is that they are quite difficult to read, and more importantly, they are a completely imperative approach.

When relying on bash scripts, implementation details or small differences between machine states can break the configuration process. There’s also the question of what happens if someone SSH-s into the server, configures something through the command line, and then later someone else runs the script expecting the old state.

The script might run successfully, simply break, or things could completely go haywire. No one can tell.

To alleviate the pain caused by the drawbacks of defining our server configurations by bash scripts, we needed a declarative way to apply idempotent changes to the servers’ state, meaning that it does not matter how many times we run our script, it should always result in reaching the exact same expected state.

This is the idea behind the Infrastructure as Code (IaC) concept: handling the state of infrastructure through idempotent changes, defined with an easily readable, domain-specific language.

What are these declarative approaches?

First, Puppet was born, then came Chef. Both of them were responses to the widespread adoption of using clusters of virtual machines that need to be configured together.

Both Puppet and Chef follow the so-called “pull-based” method of configuration management. This means that you define the configuration – using their respective domain-specific language – which is stored on a server. When new machines are spun up, they need to have a configured client that pulls the configuration definitions from the server and applies them to itself.

Using their domain-specific language was definitely clearer and more self-documenting than writing bash scripts. It is also convenient that they apply the desired configuration automatically after spinning up the machines.

However, one could argue that the need for a preconfigured client makes them a bit clumsy. Also, the configuration of these clients is still quite complex, and if the master node which stores the configurations is down, all we can do is to fall back to the old command line / bash script method if we need to quickly update our servers.

ansible tutorial for beginners by risingstack

To avoid a single point of failure, Ansible was created.

Ansible, like Puppet and Chef, sports a declarative, domain-specific language, but in contrast to them, Ansible follows a “push-based” method. That means that as long as you have Python installed, and you have an SSH server running on the hosts you wish to configure, you can run Ansible with no problem. We can safely say that expecting SSH connectivity from a server is definitely not inconceivable.

Long story short, Ansible gives you a way to push your declarative configuration to your machines.

Later came SaltStack. It also follows the push-based approach, but it comes with a lot of added features and, with them, a lot of added complexity, both usage- and maintenance-wise.

Thus, while Ansible is definitely not the most powerful of the four most common solutions, it is hands down the easiest to get started with, and it should be sufficient to cover 99% of conceivable use-cases.

If you’re just getting started in the world of IaC, Ansible should be your starting point, so let’s stick with it for now.

Other IaC tools you should know about

While the four mentioned above (Puppet, Chef, Salt, Ansible) handle the configuration of individual machines in bulk, there are other IaC tools that can be used in conjunction with them. Let’s quickly list them for the sake of completeness, and so that you don’t get lost in the landscape.

Vagrant: It has been around for quite a while. Contrary to Puppet, Chef, Ansible, and Salt, Vagrant gives you a way to create blueprints of virtual machines. This also means that you can only create VMs using Vagrant, but you cannot modify them. So it can be a useful companion to your favorite configuration manager, to either set up their client, or SSH server, to get them started.

Terraform: Vagrant comes in handy before you can use Ansible, if you maintain your own fleet of VMs. If you’re in the cloud, Terraform can be used to declaratively provision VMs, set up networks, or basically anything you can handle with the UI, API, or CLI of your favorite cloud provider. Feature support may vary depending on the actual provider, and most providers come with their own IaC solutions as well, but if you prefer not to be locked into a platform, Terraform might be the best solution to go with.

Kubernetes: Container orchestration systems can be considered Infrastructure as Code as well; especially with Kubernetes, you have control over the internal network, the containers, and a lot of aspects of the actual machines. Basically, it’s more like an OS in its own right than anything else. However, it requires you to have a running cluster of VMs with Kubernetes installed and configured.

All in all, you can use either Vagrant or Terraform to lay the groundwork for your fleet of VMs, then use Ansible, Puppet, Chef or Salt to handle their configuration continuously. Finally, Kubernetes can give you a way to orchestrate your services on them.

Are you looking for expert help with infrastructure related issues or project? Check out our DevOps and Infrastructure related services, or reach out to us at info@risingstack.com.

We’ve previously written a lot about Kubernetes, so this time we’ll take a step back and take a look at our favorite remote configuration management tool:

What is Ansible?

Let’s take apart what we already know:

Ansible is a push-based IaC, providing a user-friendly domain-specific language so you can define your desired architecture in a declarative way.

Being push-based means that Ansible uses SSH for communicating between the machine that runs Ansible and the machines the configuration is being applied to.

The machines we wish to configure using Ansible are called managed nodes or Ansible hosts. In Ansible’s terminology, the list of hosts is called an inventory.

The machine that reads the definition files and runs Ansible to push the configuration to the hosts is called a control node.

How to Install Ansible

It is enough to install Ansible only on one machine, the control node.

Control node requirements are the following:

  • Python 2 (version 2.7) or Python 3 (versions 3.5 and higher) installed
  • Windows is not supported as a control node, but you can set it up on Windows 10 using WSL
  • Managed nodes also need Python to be installed.

RHEL and CentOS

sudo yum install ansible

Debian based distros and WSL

sudo apt update
sudo apt install software-properties-common
sudo apt-add-repository --yes --update ppa:ansible/ansible
sudo apt install ansible

MacOS

The preferred way to install Ansible on a Mac is via pip.

pip install --user ansible

Run the following Ansible command to verify the installation:

ansible --version

Ansible Setup, Configuration, and Automation Tutorial

For the purposes of this tutorial, we’ll set up a Raspberry Pi with Ansible, so even if the SD card gets corrupted, we can quickly set it up again and continue working with it.

  1. Flash image (Raspbian)
  2. Login with default credentials (pi/raspberry)
  3. Change default password
  4. Set up passwordless SSH
  5. Install packages you want to use

With Ansible, we can automate the process.

Let’s say we have a couple of Raspberry Pis, and after installing the operating system on them, we need the following packages to be installed on all devices:

  • vim
  • wget
  • curl
  • htop

We could install these packages one by one on every device, but that would be tedious. Let Ansible do the job instead.

First, we’ll need to create a project folder.

mkdir bootstrap-raspberry && cd bootstrap-raspberry

We need a config file and a hosts file. Let’s create them.

touch ansible.cfg
touch hosts    # no file extension needed

Ansible can be configured using a config file named ansible.cfg. You can find an example with all the options here.

Security risk: if you load ansible.cfg from a world-writable folder, another user could place their own config file there and run malicious code. More about that here.

The configuration file will be searched for in the following order:

  1. ANSIBLE_CONFIG (environment variable if set)
  2. ansible.cfg (in the current directory)
  3. ~/.ansible.cfg (in the home directory)
  4. /etc/ansible/ansible.cfg

So if we have an ANSIBLE_CONFIG environment variable, Ansible will ignore all the other files (2., 3., 4.). On the other hand, if we don’t specify a config file, /etc/ansible/ansible.cfg will be used.

Now we’ll use a very simple config file with contents below:

[defaults]
inventory = hosts
host_key_checking = False

Here we tell Ansible to use our hosts file as the inventory and not to check host keys. Ansible has host key checking enabled by default. If a host is reinstalled and has a different key in the known_hosts file, this will result in an error message until it is corrected. If a host is not initially in known_hosts, this will result in an interactive prompt for confirmation, which is not favorable if you want to automate your processes.

Now let’s open up the hosts file:

[raspberries]
192.168.0.74
192.168.0.75
192.168.0.76


[raspberries:vars]
ansible_connection=ssh  
ansible_user=pi 
ansible_ssh_pass=raspberry

We list the IP addresses of the Raspberry Pis under the [raspberries] block and then assign variables to them.

  • ansible_connection: Connection type to the host. Defaults to ssh. See other connection types here
  • ansible_user: The user name to use when connecting to the host
  • ansible_ssh_pass: The password to use to authenticate to the host
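
Before writing any playbooks, it’s worth making sure that Ansible can actually reach all three Pis. A quick, optional sanity check is an ad-hoc run of the ping module from the project folder (so our ansible.cfg and hosts file are picked up):

ansible raspberries -m ping

If everything is wired up correctly, each host should answer with pong.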

Creating an Ansible Playbook

Now we’re done with the configuration of Ansible. We can start setting up the tasks we would like to automate. Ansible calls the list of these tasks “playbooks”.

In our case, we want to:

  1. Change the default password,
  2. Add our SSH public key to authorized_keys,
  3. Install a few packages.

Meaning, we’ll have 3 tasks in our playbook that we’ll call pi-setup.yml.

By default, Ansible will attempt to run a playbook on all hosts in parallel, but the tasks in the playbook are run serially, one after another.

Let’s take a look at our pi-setup.yml as an example:

- hosts: all
  become: 'yes'
  vars:
    user:
      - name: "pi"
        password: "secret"
        ssh_key: "ssh-rsa …"
    packages:
      - vim
      - wget
      - curl
      - htop
  tasks:
    - name: Change password for default user
      user:
        name: '"{{ item.name }}"'
        password: '"{{ item.password | password_hash('sha512') }}"'
        state: present
      loop:
        - '"{{ user }}"'
    - name: Add SSH public key
      authorized_key:
        user: '"{{ item.name }}"'
        key: '"{{ item.ssh_key }}"'
      loop:
        - '"{{ user }}"'
    - name: Ensure a list of packages installed
      apt:
        name: '"{{ packages }}"'
        state: present
    - name: All done!
      debug:
        msg: Packages have been successfully installed

Tearing down our Ansible Playbook Example

Let’s tear down this playbook.

- hosts: all
  become: 'yes'
  vars:
    user:
      - name: "pi"
        password: "secret"
        ssh_key: "ssh-rsa …"
    packages:
      - vim
      - wget
      - curl
      - htop
  tasks:  [ … ]

This part defines fields that are related to the whole playbook:

  1. hosts: all: Here we tell Ansible to execute this playbook on all hosts defined in our hosts file.
  2. become: yes: Execute commands as sudo user. Ansible uses privilege escalation systems to execute tasks with root privileges or with another user’s permissions. This lets you become another user, hence the name.
  3. vars: User-defined variables. Once you’ve defined variables, you can use them in your playbooks using the Jinja2 templating system. There are other sources vars can come from, such as variables discovered from the system. These variables are called facts.
  4. tasks: List of commands we want to execute

Let’s take another look at the first task we defined earlier without addressing the user modules’ details. Don’t fret if it’s the first time you hear the word “module” in relation to Ansible, we’ll discuss them in detail later.

tasks:
  - name: Change password for default user
    user:
      name: '"{{ item.name }}"'
      password: '"{{ item.password | password_hash('sha512') }}"'
      state: present
    loop:
      - '"{{ user }}"'
  1. name: Short description of the task making our playbook self-documenting.
  2. user: The module the task at hand configures and runs. Each module is an object encapsulating a desired state. These modules can control system resources, services, files or basically anything. For example, the documentation for the user module can be found here. It is used for managing user accounts and user attributes.
  3. loop: Loop over variables. If you want to repeat a task multiple times with different inputs, loops come in handy. Let’s say we have 100 users defined as variables and we’d like to register them. With loops, we don’t have to run the playbook 100 times, just once.

Understanding the Ansible User Module

Zooming in on the user module:

user:
  name: '"{{ item.name }}"'
  password: '"{{ item.password | password_hash('sha512') }}"'
  state: present
loop:
  - '"{{ user }}"'

Ansible comes with a number of modules, and each module encapsulates logic for a specific task/service. The user module above defines a user and its password. It doesn’t matter if it has to be created or if it’s already present and only its password needs to be changed, Ansible will handle it for us.

Note that Ansible will only accept hashed passwords, so either you provide pre-hashed characters or – as above – use a hashing filter.

Are you looking for expert help with infrastructure related issues or project? Check out our DevOps and Infrastructure related services, or reach out to us at info@risingstack.com.

For the sake of simplicity, we stored our user’s password in our example playbook, but you should never store passwords in playbooks directly. Instead, you can use variable flags when running the playbook from the CLI, or use a password store such as Ansible Vault or the 1Password module.
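
For example, ansible-vault can encrypt a single value that you then paste into the playbook instead of the plain-text password; the variable name below is just an example.

ansible-vault encrypt_string 'secret' --name 'password'

When running the playbook, pass --ask-vault-pass so Ansible can decrypt the value.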

Most modules expose a state parameter, and it is best practice to explicitly define it when it’s possible. State defines whether the module should make something present (add, start, execute) or absent (remove, stop, purge). Eg. create or remove a user, or start / stop / delete a Docker container.

Notice that the user module will be called at each iteration of the loop, passing in the current value of the user variable. The loop is not part of the module; it’s on the outer indentation level, meaning it’s task-related.

The Authorized Keys Module

The authorized_keys module adds or removes SSH authorized keys for a particular user’s account, thus enabling passwordless SSH connection.

- name: Add SSH public key
  authorized_key:
    user: '"{{ item.name }}"'
    key: '"{{ item.ssh_key }}"'

The task above will take the specified key and add it to the specified user’s ~/.ssh/authorized_keys file, just as you would either by hand or by using ssh-copy-id.

The Apt module

We need a new vars block for the packages to be installed.

vars:
  packages:
    - vim
    - wget
    - curl
    - htop

tasks:
  - name: Ensure a list of packages installed
    apt:
      name: '"{{ packages }}"'
      state: present

The apt module manages apt packages (such as for Debian/Ubuntu). The name field can take a list of packages to be installed. Here, we define a variable to store the list of desired packages to keep the task cleaner, and this also gives us the ability to overwrite the package list with command-line arguments if we feel necessary when we apply the playbook, without editing the actual playbook.

The state field is set to present, meaning that Ansible should install the package if it’s missing, or skip it if it’s already present. In other words, it ensures that the package is present. It could also be set to absent (ensure that it’s not there), latest (ensure that it’s there and it’s the latest version), build-deps (ensure that its build dependencies are present), or fixed (attempt to correct a system with broken dependencies in place).

Let’s run our Ansible Playbook

Just to reiterate, here is the whole playbook together:

- hosts: all
  become: 'yes'
  vars:
    user:
      - name: "pi"
        password: "secret"
        ssh_key: "ssh-rsa …"
    packages:
      - vim
      - wget
      - curl
      - htop
  tasks:
    - name: Change password for default user
      user:
        name: '"{{ item.name }}"'
        password: '"{{ item.password | password_hash('sha512') }}"'
        state: present
      loop:
        - '"{{ user }}"'
    - name: Add SSH public key
      authorized_key:
        user: '"{{ item.name }}"'
        key: '"{{ item.ssh_key }}"'
      loop:
        - '"{{ user }}"'
    - name: Ensure a list of packages installed
      apt:
        name: '"{{ packages }}"'
        state: present
    - name: All done!
      debug:
        msg: Packages have been successfully installed

Now we’re ready to run the playbook:

ansible-playbook pi-setup.yml

Or we can run it while overriding the config file and the playbook variables:

$ export ANSIBLE_HOST_KEY_CHECKING=False

$ ansible-playbook -i "192.168.0.74,192.168.0.75," -e "ansible_user=john ansible_ssh_pass=johnspassword" -e '{"user": [{"name": "pi", "password": "raspberry", "state": "present"}]}' -e '{"packages": ["curl", "wget"]}' pi-setup.yml

The command-line flags used in the snippet above are:

  • -i (inventory): specifies the inventory. It can either be a comma-separated list as above, or an inventory file.
  • -e (or --extra-vars): variables can be added or overridden through this flag. In our case, we are overriding the configuration laid out in our hosts file (ansible_user, ansible_ssh_pass) and the user and packages variables that we previously set up in our playbook.

What to use Ansible for

Of course, Ansible is not used solely for setting up home-made servers.

Ansible is used to manage VM fleets in bulk, making sure that each newly created VM has the same configuration as the others. It also makes it easy to change the configuration of the whole fleet together by applying a change to just one playbook.

But Ansible can be used for a plethora of other tasks as well. If you have just a single server running in a cloud provider, you can define its configuration in a way that others can read and use easily. You can also define maintenance playbooks, such as creating new users and adding the SSH keys of new employees to the servers, so they can log into the machines as well.

Or you can use AWX or Ansible Tower to create a GUI based Linux server management system that provides a similar experience to what Windows Servers provide.

Stay tuned and make sure to reach out to us in case you’re looking for DevOps, SRE or Cloud Consulting Services

D3.js Bar Chart Tutorial: Build Interactive JavaScript Charts and Graphs

Recently, we had the pleasure to participate in a machine learning project that involved libraries like React and D3.js. Among many tasks, I developed a few d3 bar charts and line charts that helped to process the result of ML models like Naive Bayes.

In this article, I would like to present my progress with D3.js so far and show the basic usage of this javascript chart library through the simple example of a bar chart.

After reading this article, you’ll learn how to create D3.js charts like this easily:

d3-js-tutorial-bar-chart-made-with-javascript-small

The full source code is available here.

We at RisingStack are fond of the JavaScript ecosystem, backend, and front-end development as well. Personally, I am interested in both of them. On the backend, I can see through the underlying business logic of an application while I also have the opportunity to create awesome looking stuff on the front-end. That’s where D3.js comes into the picture!

Update: a 2nd part of my d3.js tutorial series is available as well: Building a D3.js Calendar Heatmap (to visualize StackOverflow Usage Data)

What is D3.js?

D3.js is a data driven JavaScript library for manipulating DOM elements.

“D3 helps you bring data to life using HTML, SVG, and CSS. D3’s emphasis on web standards gives you the full capabilities of modern browsers without tying yourself to a proprietary framework, combining powerful visualization components and a data-driven approach to DOM manipulation.” – d3js.org

Why would You create charts with D3.js in the first place? Why not just display an image?

Well, charts are based on information coming from third-party resources, which requires dynamic visualization at render time. Also, SVG is a very powerful tool that fits this use case well.

Let’s take a detour to see what benefits we can get from using SVG.

The benefits of SVG

SVG stands for Scalable Vector Graphics which is technically an XML based markup language.

It is commonly used to draw vector graphics, specify lines and shapes or modify existing images. You can find the list of available elements here.

Pros:

  • Supported in all major browsers;
  • It has DOM interface, requires no third-party lib;
  • Scalable, it can maintain high resolution;
  • Reduced size compared to other image formats.

Cons:

  • It can only display two-dimensional images;
  • Long learning curve;
  • Render may take long with compute-intensive operations.

Despite its downsides, SVG is a great tool to display icons, logos, illustrations or in this case, charts.

Getting started with D3.js

I picked barcharts to get started because it represents a low complexity visual element while it teaches the basic application of D3.js itself. This should not deceive You, D3 provides a great set of tools to visualize data. Check out its github page for some really nice use cases!

A bar chart can be horizontal or vertical based on its orientation. I will go with the vertical one in the form of a JavaScript Column chart.

On this diagram, I am going to display the top 10 most loved programming languages based on Stack Overflow’s 2018 Developer Survey result.

How to draw bar graphs with SVG?

SVG has a coordinate system that starts from the top left corner (0;0). Positive x-axis goes to the right, while the positive y-axis heads to the bottom. Thus, the height of the SVG has to be taken into consideration when it comes to calculating the y coordinate of an element.

d3-js-bar-chart-tutorial-javascript-coordinate

That’s enough background check, let’s write some code!

I want to create a chart with 1000 pixels width and 600 pixels height.

<body>
	<svg />
</body>
<script>
    const margin = 60;
    const width = 1000 - 2 * margin;
    const height = 600 - 2 * margin;

    const svg = d3.select('svg');
</script>

In the code snippet above, I select the created <svg> element in the HTML file with d3 select. This selection method accepts all kinds of selector strings and returns the first matching element. Use selectAll if You would like to get all of them.

I also define a margin value which gives a little extra padding to the chart. Padding can be applied with a <g> element translated by the desired value. From now on, I draw on this group to keep a healthy distance from any other contents of the page.

const chart = svg.append('g')
    .attr('transform', `translate(${margin}, ${margin})`);

Adding attributes to an element is as easy as calling the attr method. The method’s first parameter takes an attribute I want to apply to the selected DOM element. The second parameter is the value or a callback function that returns the value of it. The code above simply moves the start of the chart to the (60;60) position of the SVG.

Supported D3.js input formats

To start drawing, I need to define the data source I’m working from. For this tutorial, I use a plain JavaScript array which holds objects with the name of the languages and their percentage rates, but it’s important to mention that D3.js supports multiple data formats.
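
For reference, the sample array is shaped like this (the percentages below are illustrative placeholders; check the linked survey or the full source code for the real numbers):

const sample = [
  { language: 'Rust', value: 78.9 },
  { language: 'Kotlin', value: 75.1 },
  { language: 'Python', value: 68.0 }
  // ...the rest of the top 10 goes here
]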

The library has built-in functionality to load from XMLHttpRequest, .csv files, text files, etc. Each of these sources may contain data that D3.js can use; the only important thing is to construct an array out of them. Note that from version 5.0 the library uses promises instead of callbacks for loading data, which is a non-backward-compatible change.
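
For example, loading the same data from a .csv file with the promise-based API could look like this sketch; the file name and the drawChart wrapper are made up, and I’m assuming the CSV has language and value columns.

d3.csv('top-languages.csv').then((data) => {
    // every CSV column arrives as a string, so convert the rate by hand
    const sample = data.map((d) => ({ language: d.language, value: +d.value }))
    drawChart(sample) // a hypothetical function wrapping the chart code below
}).catch((err) => console.error(err))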

Scaling, Axes

Let’s go on with the axes of the chart. In order to draw the y-axis, I need to set the lowest and the highest value limit which in this case are 0 and 100.

I’m working with percentages in this tutorial, but there are utility functions for data types other than numbers which I will explain later.

I have to split the height of the chart between these two values into equal parts. For this, I create something that is called a scaling function.

const yScale = d3.scaleLinear()
    .range([height, 0])
    .domain([0, 100]);

Linear scale is the most commonly known scaling type. It converts a continuous input domain into a continuous output range. Notice the range and domain method. The first one takes the length that should be divided between the limits of the domain values.

Remember, the SVG coordinate system starts from the top left corner that’s why the range takes the height as the first parameter and not zero.

Creating an axis on the left is as simple as adding another group and calling d3’s axisLeft method with the scaling function as a parameter.

chart.append('g')
    .call(d3.axisLeft(yScale));

Now, continue with the x-axis.

const xScale = d3.scaleBand()
    .range([0, width])
    .domain(sample.map((s) => s.language))
    .padding(0.2)

chart.append('g')
    .attr('transform', `translate(0, ${height})`)
    .call(d3.axisBottom(xScale));

d3-js-tutorial-bar-chart-labels

Be aware that I use scaleBand for the x-axis which helps to split the range into bands and compute the coordinates and widths of the bars with additional padding.

D3.js is also capable of handling date types among many others. scaleTime is really similar to scaleLinear, except the domain here is an array of dates.
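
A time-based axis could be set up like this; the dates are arbitrary examples.

const xTimeScale = d3.scaleTime()
    .domain([new Date(2018, 0, 1), new Date(2018, 11, 31)])
    .range([0, width])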

Tutorial: Bar drawing in D3.js

Think about what kind of input we need to draw the bars. They each represent a value which is illustrated with simple shapes, specifically rectangles. In the next code snippet, I append them to the created group element.

chart.selectAll()
    .data(sample)
    .enter()
    .append('rect')
    .attr('x', (s) => xScale(s.language))
    .attr('y', (s) => yScale(s.value))
    .attr('height', (s) => height - yScale(s.value))
    .attr('width', xScale.bandwidth())

First, I call selectAll on the chart, which returns an empty selection. Then, the data function tells how many elements the DOM should be updated with, based on the array length. enter identifies elements that are missing if the data input is longer than the selection. This returns a new selection representing the elements that need to be added. Usually, this is followed by an append which adds the elements to the DOM.

Basically, I tell D3.js to append a rectangle for every member of the array.

Now, this only adds rectangles on top of each other which have no height or width. These two attributes have to be calculated and that’s where the scaling functions come handy again.

See, I add the coordinates of the rectangles with the attr call. The second parameter can be a callback which takes 3 parameters: the actual member of the input data, index of it and the whole input.

.attr('x', (actual, index, array) =>
    xScale(actual.value))

The scaling function returns the coordinate for a given domain value. Calculating the coordinates is a piece of cake; the trick is with the height of the bar. The computed y coordinate has to be subtracted from the height of the chart to get the correct representation of the value as a column.

I define the width of the rectangles with the scaling function as well. scaleBand has a bandwidth function which returns the computed width for one element based on the set padding.

d3-js-tutorial-bar-chart-drawn-out-with-javascript

Nice job, but not so fancy, right?

To prevent our audience from eye bleeding, let’s add some info and improve the visuals! 😉

Tips on making javascript bar charts

There are some ground rules with bar charts that are worth mentioning:

  • Avoid using 3D effects;
  • Order data points intuitively – alphabetically or sorted;
  • Keep distance between the bands;
  • Start y-axis at 0 and not with the lowest value;
  • Use consistent colors;
  • Add axis labels, title, source line.

D3.js Grid System

I want to highlight the values by adding grid lines in the background.

Go ahead and experiment with both vertical and horizontal lines, but my advice is to display only one of them. Excessive lines can be distracting. This code snippet shows how to add both solutions.

chart.append('g')
    .attr('class', 'grid')
    .attr('transform', `translate(0, ${height})`)
    .call(d3.axisBottom()
        .scale(xScale)
        .tickSize(-height, 0, 0)
        .tickFormat(''))

chart.append('g')
    .attr('class', 'grid')
    .call(d3.axisLeft()
        .scale(yScale)
        .tickSize(-width, 0, 0)
        .tickFormat(''))

[Image: d3-js-bar-chart-grid-system-added-with-javascript]

I prefer the vertical grid lines in this case because they lead the eyes and keep the overall picture plain and simple.

Labels in D3.js

I also want to make the diagram more comprehensive by adding some textual guidance. Let’s give a name to the chart and add labels for the axes.

[Image: d3-js-tutorial-adding-labels-to-bar-chart-javascript]

Texts are SVG elements that can be appended to the SVG or groups. They can be positioned with x and y coordinates while text alignment is done with the text-anchor attribute. To add the label itself, just call text method on the text element.

svg.append('text')
    .attr('x', -(height / 2) - margin)
    .attr('y', margin / 2.4)
    .attr('transform', 'rotate(-90)')
    .attr('text-anchor', 'middle')
    .text('Love meter (%)')

svg.append('text')
    .attr('x', width / 2 + margin)
    .attr('y', 40)
    .attr('text-anchor', 'middle')
    .text('Most loved programming languages in 2018')

Interactivity with Javascript and D3

We got quite an informative chart but still, there are possibilities to transform it into an interactive bar chart!

In the next code block I show You how to add event listeners to SVG elements.

svgElement
    .on('mouseenter', function (actual, i) {
        d3.select(this).attr('opacity', 0.5)
    })
    .on('mouseleave', function (actual, i) {
        d3.select(this).attr('opacity', 1)
    })

Note that I use a function expression instead of an arrow function because I access the element via the this keyword.

I set the opacity of the selected SVG element to half of the original value on mouse hover and reset it when the cursor leaves the area.

You could also get the mouse coordinates with d3.mouse. It returns an array with the x and y coordinate. This way, displaying a tooltip at the tip of the cursor would be no problem at all.
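
A rough sketch of that idea, assuming a hypothetical absolutely positioned <div id="tooltip"> element inside the chart's container (d3.mouse belongs to the d3 version used in this article; newer releases replaced it with d3.pointer):

svgElement
    .on('mousemove', function () {
        // d3.mouse returns the [x, y] coordinates relative to the given container
        const [x, y] = d3.mouse(this);
        d3.select('#tooltip')
            .style('left', `${x + 10}px`)
            .style('top', `${y + 10}px`);
    })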

Creating eye-popping diagrams is not an easy art form.

One might require the wisdom of graphic designers, UX researchers and other mighty creatures. In the following example I’m going to show a few possibilities to boost Your chart!

I have very similar values displayed on the chart, so to highlight the divergences among the bar values, I set up an event listener for the mouseenter event. Every time the user hovers over a specific column, a horizontal line is drawn on top of that bar. Furthermore, I also calculate the differences compared to the other bands and display them on the bars.

[Image: d3-js-tutorial-adding-interactivity-to-bar-chart]

Pretty neat, huh? I also added the opacity example to this one and increased the width of the bar.

.on('mouseenter', function (s, i) {
    d3.select(this)
        .transition()
        .duration(300)
        .attr('opacity', 0.6)
        .attr('x', (a) => xScale(a.language) - 5)
        .attr('width', xScale.bandwidth() + 10)

    chart.append('line')
        .attr('x1', 0)
        .attr('y1', y)
        .attr('x2', width)
        .attr('y2', y)
        .attr('stroke', 'red')

    // this is only part of the implementation, check the source code
})

The transition method indicates that I want to animate changes to the DOM. Its interval is set with the duration function, which takes milliseconds as its argument. The transition above fades the band color and broadens the width of the bar.

To draw an SVG line, I need a start and a destination point. These can be set via the x1, y1 and x2, y2 coordinates. The line will not be visible until I set its color with the stroke attribute.

I only revealed part of the mouseenter event here so keep in mind, You have to revert or remove the changes on the mouseout event. The full source code is available at the end of the article.

Let’s Add Some Style to the Chart!

Let’s see what we achieved so far and how we can shake up this chart with some style. You can add class attributes to SVG elements with the same attr function we used before.

The diagram has a nice set of functionality. Instead of a dull, static picture, it also reveals the divergences among the represented values on mouse hover. The title puts the chart into context and the labels help to identify the axes with the unit of measurement. I also add a new label to the bottom right corner to mark the input source.

The only thing left is to upgrade the colors and fonts!

A dark background makes the bright colored bars look cool. I also applied the Open Sans font family to all the texts, and set size and weight for the different labels.

Do You notice the line got dashed? That is done by setting the stroke-width and stroke-dasharray attributes. With stroke-dasharray, You can define a pattern of dashes and gaps that alters the outline of the shape.

line#limit {
    stroke: #FED966;
    stroke-width: 3;
    stroke-dasharray: 3 6;
}

.grid path {
    stroke-width: 0;
}

.grid .tick line {
    stroke: #9FAAAE;
    stroke-opacity: 0.2;
}

Grid lines are where it gets tricky. I have to apply stroke-width: 0 to the path elements in the group to hide the frame of the diagram, and I also reduce their visibility by setting the opacity of the lines.

All the other css rules cover the font sizes and colors which You can find in the source code.

Wrapping up our D3.js Bar Chart Tutorial

D3.js is an amazing library for DOM manipulation and for building javascript graphs and line charts. The depth of it hides countless treasures (actually not hidden, it is really well documented) that wait for discovery. This writing covers only fragments of its toolset that help to create a not so mediocre bar chart.

Go on, explore it, use it and create spectacular JavaScript graphs & visualizations!

By the way, here’s the link to the source code.

Have You created something cool with D3.js? Share with us! Drop a comment if You have any questions or would like another JavaScript chart tutorial!

Thanks for reading and see You next time when I’m building a calendar heatmap with d3.js!

Puppeteer HTML to PDF Generation with Node.js

In this article I’m going to show how you can generate a Puppeteer PDF document from a heavily styled React web page using Node.js, headless Chrome & Docker.

Background: A few months ago one of the clients of RisingStack asked us to develop a feature where the user would be able to request a React page in PDF format. That page is basically a report/result for patients with data visualization, containing a lot of SVGs. Furthermore, there were some special requests to manipulate the layout and make some rearrangements of the HTML elements. So the PDF should have different styling and additions compared to the original React page.

As the assignment was a bit more complex than what could have been solved with simple CSS rules, we first explored possible implementations. Essentially, we found 3 main solutions. This blogpost will walk you through these possibilities and the final implementation.

A personal comment before we get started: it’s quite a hassle, so buckle up!

Client side or Server side PDF generation?

It is possible to generate a PDF file both on the client-side and on the server-side. However, it probably makes more sense to let the backend handle it, as you don’t want to use up all the resources the user’s browser can offer.

Even so, I’ll still show solutions for both methods.

Option 1: Make a Screenshot from the DOM

At first sight, this solution seemed to be the simplest, and it turned out to be true, but it has its own limitations. If you don’t have special needs, like selectable or searchable text in the PDF, it is a good and simple way to generate one.

This method is plain and simple: create a screenshot from the page, and put it in a PDF file. Pretty straightforward. We used two packages for this approach:

  • html2canvas, to make a screenshot from the DOM
  • jsPDF, a library to generate PDFs

Let’s start coding.

npm install html2canvas jspdf

import html2canvas from 'html2canvas'
import jsPdf from 'jspdf'
 
function printPDF () {
    const domElement = document.getElementById('your-id')
    html2canvas(domElement, { onclone: (clonedDoc) => {
      // hide the print button on the cloned DOM before taking the screenshot
      clonedDoc.getElementById('print-button').style.visibility = 'hidden'
    }})
    .then((canvas) => {
        const imgData = canvas.toDataURL('image/png')
        const pdf = new jsPdf()
        // size the image to the PDF page
        const width = pdf.internal.pageSize.getWidth()
        const height = pdf.internal.pageSize.getHeight()
        pdf.addImage(imgData, 'PNG', 0, 0, width, height)
        pdf.save('your-filename.pdf')
    })
}

And that’s it!

Make sure you take a look at the html2canvas onclone method. It can prove to be handy when you quickly need to take a snapshot and manipulate the DOM (e.g. hide the print button) before taking the picture. I can see quite a lot of use cases for this package. Unfortunately, ours wasn’t one, as we needed to handle the PDF creation on the backend side.

Option 2: Use only a PDF Library

There are several libraries out there on NPM for this purpose, like jsPDF (mentioned above) or PDFKit. The problem with them is that I would have to recreate the page structure if I wanted to use these libraries. That definitely hurts maintainability, as I would have needed to apply all subsequent changes to both the PDF template and the React page.

Take a look at the code below. You need to create the PDF document yourself by hand. Now you could traverse the DOM and figure out how to translate each element to PDF ones, but that is a tedious job. There must be an easier way.

const PDFDocument = require('pdfkit')
const fs = require('fs')

const doc = new PDFDocument()
doc.pipe(fs.createWriteStream('output.pdf'))

doc.font('fonts/PalatinoBold.ttf')
   .fontSize(25)
   .text('Some text with an embedded font!', 100, 100)
 
doc.image('path/to/image.png', {
   fit: [250, 300],
   align: 'center',
   valign: 'center'
})
 
doc.addPage()
   .fontSize(25)
   .text('Here is some vector graphics...', 100, 100)
 
doc.end()

This snippet is adapted from the PDFKit docs. However, it can be useful if your target is a PDF file straight away and not the conversion of an already existing (and ever-changing) HTML page.

Final Option 3: Puppeteer, Headless Chrome with Node.js

What is Puppeteer? The documentation says:

Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.

It’s basically a browser which you can run from Node.js. If you read the docs, the first thing it says about Puppeteer is that you can use it to 'generate screenshots and PDFs of pages'. Excellent! That’s what we were looking for.

Let’s install Puppeteer with npm i puppeteer, and implement our use case.

const puppeteer = require('puppeteer')
 
async function printPDF() {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.setViewport({ width: 1680, height: 1050 }); // the viewport size is an arbitrary example
  await page.goto('https://blog.risingstack.com', {waitUntil: 'networkidle0'});
  const pdf = await page.pdf({ format: 'A4' });
 
  await browser.close();
  return pdf
}

This is a simple function that navigates to a URL and generates a PDF file of the site.

First, we launch the browser (PDF generation only supported in headless browser mode), then we open a new page, set the viewport size, and navigate to the provided URL.

Setting the waitUntil: 'networkidle0' option means that Puppeteer considers navigation to be finished when there are no network connections for at least 500 ms. (Check API docs for further information.)

After that, we save the PDF to a variable, we close the browser and return the PDF.

Note: The page.pdf method receives an options object, where you can save the file to disk with the 'path' option as well. If path is not provided, the PDF won’t be saved to disk; you’ll get a buffer instead. (Later on, I discuss how you can handle it.)
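
For example, if you'd rather have the file written to disk right away, the call could look like this (the file name is just an example):

// with a path, Puppeteer writes the PDF to disk and still resolves with the buffer
const pdf = await page.pdf({ path: 'result.pdf', format: 'A4' });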

In case you need to log in first to generate a PDF from a protected page, first you need to navigate to the login page, inspect the form elements for ID or name, fill them in, then submit the form:

await page.type('#email', process.env.PDF_USER)
await page.type('#password', process.env.PDF_PASSWORD)
await page.click('#submit')

Always store login credentials in environment variables, do not hardcode them!
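
Depending on the page, you may also have to wait for the navigation triggered by the login to finish before generating the PDF; a common pattern for that is:

// submit the form and wait for the resulting navigation in parallel
await Promise.all([
  page.waitForNavigation({ waitUntil: 'networkidle0' }),
  page.click('#submit'),
]);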

Style Manipulation

Puppeteer has a solution for this style manipulation too. You can insert style tags before generating the PDF, and Puppeteer will generate a file with the modified styles.

await page.addStyleTag({ content: '.nav { display: none} .navbar { border: 0px} #print-button {display: none}' })

Send file to the client and save it

Okay, now you have generated a PDF file on the backend. What to do now?

As I mentioned above, if you don’t save the file to disk, you’ll get a buffer. You just need to send that buffer with the proper content type to the front-end.

printPDF().then(pdf => {
	res.set({ 'Content-Type': 'application/pdf', 'Content-Length': pdf.length })
	res.send(pdf)
})

Now you can simply send a request to the server, to get the generated PDF.

function getPDF() {
 return axios.get(`${API_URL}/your-pdf-endpoint`, {
   responseType: 'arraybuffer',
   headers: {
     'Accept': 'application/pdf'
   }
 })
}

Once you’ve sent the request, the buffer should start downloading. Now the last step is to convert the buffer into a PDF file.

savePDF = () => {
    this.openModal('Loading…') // open modal
   return getPDF() // API call
     .then((response) => {
       const blob = new Blob([response.data], {type: 'application/pdf'})
       const link = document.createElement('a')
       link.href = window.URL.createObjectURL(blob)
       link.download = `your-file-name.pdf`
       link.click()
       this.closeModal() // close modal
     })
   .catch(err => /** error handling **/)
 }
<button onClick={this.savePDF}>Save as PDF</button>

That was it! If you click on the save button, the PDF will be saved by the browser.

Using Puppeteer with Docker

I think this is the trickiest part of the implementation – so let me save you a couple of hours of Googling.

The official documentation states that “getting headless Chrome up and running in Docker can be tricky”. The official docs have a Troubleshooting section, where at the time of writing you can find all the necessary information on installing puppeteer with Docker.

If you install Puppeteer on the Alpine image, make sure you scroll down a bit to this part of the page. Otherwise, you might gloss over the fact that you cannot run the latest Puppeteer version and you also need to disable shm usage, using a flag:

const browser = await puppeteer.launch({
  headless: true,
  args: ['--disable-dev-shm-usage']
});

Otherwise, the Puppeteer sub process might run out of memory before it even gets started properly. More info about that on the troubleshooting link above.

Option 3 + 1: CSS Print Rules

One might think that simply using CSS print rules is easy from a developer's standpoint. No NPM or node modules, just pure CSS. But how do they fare when it comes to cross-browser compatibility?

When choosing CSS print rules, you have to test the outcome in every browser to make sure it provides the same layout, and it's not guaranteed that it does.

For example, inserting a break after a given element cannot be considered an esoteric use case, yet you might be surprised that you need to use workarounds to get that working in Firefox.

Unless you are a battle-hardened CSS magician with a lot of experience in creating printable pages, this can be time-consuming.

Print rules are great if you can keep the print stylesheets simple.

Let’s see an example.

@media print {
    .print-button {
        display: none;
    }
    
    .content div {
        break-after: always;
    }
}

The CSS above hides the print button and inserts a page break after every div with the class content. There is a great article that summarizes what you can do with print rules, and what the difficulties with them are, including browser compatibility.

Taking everything into account, CSS print rules are great and effective if you want to make a PDF from a not so complex page.

Summary: Puppeteer PDF from HTML with Node.js

So let’s quickly go through the options we covered here for generating PDF files from HTML pages:

  • Screenshot from the DOM: This can be useful when you need to create snapshots from a page (for example to create a thumbnail), but falls short when you have a lot of data to handle.
  • Use only a PDF library: If you need to create PDF files programmatically from scratch, this is a perfect solution. Otherwise, you need to maintain the HTML and PDF templates which is definitely a no-go.
  • Puppeteer: Despite being relatively difficult to get working on Docker, it provided the best result for our use case, and it was also the easiest to write the code with.
  • CSS print rules: If your users are educated enough to know how to print to a file and your pages are relatively simple, it can be the most painless solution. As you saw in our case, it wasn’t.

Make sure to reach out to RisingStack when you need help with Node, React, or just JS in general.

Have fun with your HTML to PDF generation!

Async Await in Node.js – How to Master it?

In this article, you will learn how you can simplify your callback or Promise based Node.js application with async functions (async await).

Whether you’ve looked at async/await and promises in JavaScript before, but haven’t quite mastered them yet, or just need a refresher, this article aims to help you.

What are async functions in Node.js?

Async functions are available natively in Node and are denoted by the async keyword in their declaration. They always return a promise, even if you don’t explicitly write them to do so. Also, the await keyword can only be used inside async functions (with the exception of top-level await in ES modules); it cannot be used in ordinary synchronous code.
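
A tiny illustration of that wrapping:

async function getOne () {
  return 1; // even though we return a plain number, the caller receives a Promise
}

getOne().then((value) => console.log(value)); // logs 1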

In an async function, you can await any Promise or catch its rejection cause.
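
For example (fetchUser below is just a hypothetical Promise-returning function):

async function getUserName (id) {
  try {
    // await gives back the resolved value of the Promise
    const user = await fetchUser(id);
    return user.name;
  } catch (err) {
    // a rejected Promise is caught here, just like a thrown error
    console.error('Could not fetch user', err);
    throw err;
  }
}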

So if you had some logic implemented with promises:

function handler (req, res) {
  return request('https://user-handler-service')
    .catch((err) => {
      logger.error('Http error', err);
      err.logged = true;
      throw err;
    })
    .then((response) => Mongo.findOne({ user: response.body.user }))
    .catch((err) => {
      !err.logged && logger.error('Mongo error', err);
      err.logged = true;
      throw err;
    })
    .then((document) => executeLogic(req, res, document))
    .catch((err) => {
      !err.logged && console.error(err);
      res.status(500).send();
    });
}

You can make it look like synchronous code using async/await:

async function handler (req, res) {
  let response;
  try {
    response = await request('https://user-handler-service');
  } catch (err) {
    logger.error('Http error', err);
    return res.status(500).send();
  }

  let document;
  try {
    document = await Mongo.findOne({ user: response.body.user });
  } catch (err) {
    logger.error('Mongo error', err);
    return res.status(500).send();
  }

  executeLogic(req, res, document);
}

In older versions of Node you only got a warning about unhandled promise rejections; since Node 15 an unhandled rejection crashes the process by default. Either way, it is recommended to treat unhandled rejections as fatal, because when you don’t handle an error, your app is in an unknown state. You can make this behavior explicit by using the --unhandled-rejections=strict CLI flag, or by implementing something like this:

process.on('unhandledRejection', (err) => { 
  console.error(err);
  process.exit(1);
})

Automatic process exit on unhandled rejections is already the default in recent Node releases, so preparing your code for it is not a lot of effort, and it means you don’t have to worry about it when you next update versions.

Patterns with async functions in JavaScript

There are quite a few use cases when the ability to handle asynchronous operations as if they were synchronous comes in very handy, as solving them with Promises or callbacks requires the use of complex patterns.

Since node@10.0.0, there is support for async iterators and the related for-await-of loop. These come in handy when the actual values we iterate over, and the end state of the iteration, are not known by the time the iterator method returns – mostly when working with streams. Aside from streams, there are not a lot of constructs that have the async iterator implemented natively, so we’ll cover them in another post.
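
Still, as a small taste, here is a sketch of consuming a readable stream with for await...of (Node's readable streams implement the async iterator protocol):

const fs = require('fs');

async function countBytes (path) {
  let bytes = 0;
  // each chunk is a Buffer read from the file stream
  for await (const chunk of fs.createReadStream(path)) {
    bytes += chunk.length;
  }
  return bytes;
}

countBytes('./package.json')
  .then((bytes) => console.log(`${bytes} bytes`))
  .catch((err) => console.error(err));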

Retry with exponential backoff

Implementing retry logic was pretty clumsy with Promises:

function request(url) {
  return new Promise((resolve, reject) => {
    setTimeout(() => {
      reject(`Network error when trying to reach ${url}`);
    }, 500);
  });
}

function requestWithRetry(url, retryCount, currentTries = 1) {
  return new Promise((resolve, reject) => {
    if (currentTries <= retryCount) {
      const timeout = (Math.pow(2, currentTries) - 1) * 100;
      request(url)
        .then(resolve)
        .catch((error) => {
          setTimeout(() => {
            console.log('Error: ', error);
            console.log(`Waiting ${timeout} ms`);
            requestWithRetry(url, retryCount, currentTries + 1).then(resolve, reject);
          }, timeout);
        });
    } else {
      console.log('No retries left, giving up.');
      reject('No retries left, giving up.');
    }
  });
}

requestWithRetry('http://localhost:3000')
  .then((res) => {
    console.log(res)
  })
  .catch(err => {
    console.error(err)
  });

This would get the job done, but we can rewrite it with async/await and make it a lot simpler.

function wait (timeout) {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve()
    }, timeout);
  });
}

async function requestWithRetry (url) {
  const MAX_RETRIES = 10;
  for (let i = 0; i <= MAX_RETRIES; i++) {
    try {
      return await request(url);
    } catch (err) {
      const timeout = Math.pow(2, i);
      console.log('Waiting', timeout, 'ms');
      await wait(timeout);
      console.log('Retrying', err.message, i);
    }
  }
}

A lot more pleasing to the eye, isn’t it?

Intermediate values

Not as hideous as the previous example, but if you have a case where 3 asynchronous functions depend on each other the following way, then you have to choose from several ugly solutions.

functionA returns a Promise, then functionB needs that value and functionC needs the resolved value of both functionA's and functionB's Promise.

Solution 1: The .then Christmas tree

function executeAsyncTask () {
  return functionA()
    .then((valueA) => {
      return functionB(valueA)
        .then((valueB) => {          
          return functionC(valueA, valueB)
        })
    })
}

With this solution, we get valueA from the surrounding closure of the 3rd then and valueB as the value the previous Promise resolves to. We cannot flatten out the Christmas tree as we would lose the closure and valueA would be unavailable for functionC.

Solution 2: Moving to a higher scope

function executeAsyncTask () {
  let valueA
  return functionA()
    .then((v) => {
      valueA = v
      return functionB(valueA)
    })
    .then((valueB) => {
      return functionC(valueA, valueB)
    })
}

In the Christmas tree, we used a higher scope to make valueA available as well. This case works similarly, but now we created the variable valueA outside the scope of the .then-s, so we can assign the value of the first resolved Promise to it.

This one definitely works, flattens the .then chain and is semantically correct. However, it also opens up ways for new bugs in case the variable name valueA is used elsewhere in the function. We also need to use two names — valueA and v — for the same value.

Solution 3: The unnecessary array

function executeAsyncTask () {
  return functionA()
    .then(valueA => {
      return Promise.all([valueA, functionB(valueA)])
    })
    .then(([valueA, valueB]) => {
      return functionC(valueA, valueB)
    })
}

The only reason for valueA to be passed on in an array, together with the Promise functionB returns, is to be able to flatten the tree. The two values might be of completely different types, so there is a high probability that they don't belong in an array at all.

Solution 4: Write a helper function

const converge = (...promises) => (...args) => {
  let [head, ...tail] = promises
  if (tail.length) {
    return head(...args)
      .then((value) => converge(...tail)(...args.concat([value])))
  } else {
    return head(...args)
  }
}

functionA(2)
  .then((valueA) => converge(functionB, functionC)(valueA))

You can, of course, write a helper function to hide away the context juggling, but it is quite difficult to read, and may not be straightforward to understand for those who are not well versed in functional magic.

By using async/await our problems are magically gone:

async function executeAsyncTask () {
  const valueA = await functionA();
  const valueB = await functionB(valueA);
  return functionC(valueA, valueB);
}

Multiple parallel requests with async/await

This is similar to the previous one. In case you want to execute several asynchronous tasks at once and then use their values at different places, you can do it easily with async/await:

async function executeParallelAsyncTasks () {
  const [ valueA, valueB, valueC ] = await Promise.all([ functionA(), functionB(), functionC() ]);
  doSomethingWith(valueA);
  doSomethingElseWith(valueB);
  doAnotherThingWith(valueC);
}

As we’ve seen in the previous example, we would either need to move these values into a higher scope or create a non-semantic array to pass these values on.

Array iteration methods

You can use map, filter and reduce with async functions, although they behave pretty unintuitively. Try guessing what the following scripts will print to the console:

  1. map
function asyncThing (value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), 100);
  });
}

async function main () {
  return [1,2,3,4].map(async (value) => {
    const v = await asyncThing(value);
    return v * 2;
  });
}

main()
  .then(v => console.log(v))
  .catch(err => console.error(err));

  2. filter
function asyncThing (value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), 100);
  });
}

async function main () {
  return [1,2,3,4].filter(async (value) => {
    const v = await asyncThing(value);
    return v % 2 === 0;
  });
}

main()
  .then(v => console.log(v))
  .catch(err => console.error(err));

  3. reduce

function asyncThing (value) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(value), 100);
  });
}

async function main () {
  return [1,2,3,4].reduce(async (acc, value) => {
    return await acc + await asyncThing(value);
  }, Promise.resolve(0));
}

main()
  .then(v => console.log(v))
  .catch(err => console.error(err));

Solutions:

  1. [ Promise { <pending> }, Promise { <pending> }, Promise { <pending> }, Promise { <pending> } ]
  2. [ 1, 2, 3, 4 ]
  3. 10

If you log the returned values of the iteratee with map you will see the array we expect: [ 2, 4, 6, 8 ]. The only problem is that each value is wrapped in a Promise by the AsyncFunction.

So if you want to get your values, you’ll need to unwrap them by passing the returned array to a Promise.all:

main()
  .then(v => Promise.all(v))
  .then(v => console.log(v))
  .catch(err => console.error(err));

Originally, you would first wait for all your promises to resolve and then map over the values:

function main () {
  return Promise.all([1,2,3,4].map((value) => asyncThing(value)));
}

main()
  .then(values => values.map((value) => value * 2))
  .then(v => console.log(v))
  .catch(err => console.error(err));

This seems a bit simpler, doesn’t it?

The async/await version can still be useful if you have some long running synchronous logic in your iteratee and another long-running async task.

This way you can start calculating as soon as you have the first value – you don’t have to wait for all the Promises to be resolved to run your computations. Even though the results will still be wrapped in Promises, those are resolved a lot faster than if you did it the sequential way.

What about filter? Something is clearly wrong…

Well, you guessed it: even though the returned values are [ false, true, false, true ], they will be wrapped in promises, which are truthy, so you’ll get back all the values from the original array. Unfortunately, all you can do to fix this is to resolve all the values and then filter them.
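
A possible sketch of that workaround, reusing the asyncThing helper from above:

async function asyncFilter (array, predicate) {
  // resolve all the async predicate results first, then filter by them
  const results = await Promise.all(array.map(predicate));
  return array.filter((value, index) => results[index]);
}

asyncFilter([1, 2, 3, 4], async (value) => (await asyncThing(value)) % 2 === 0)
  .then((v) => console.log(v)) // [ 2, 4 ]
  .catch((err) => console.error(err));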

Reducing is pretty straightforward. Bear in mind though that you need to wrap the initial value into Promise.resolve, as the returned accumulator will be wrapped as well and has to be await-ed.

Async/await is pretty clearly intended to be used for imperative code styles.

To make your .then chains more “pure” looking, you can use Ramda’s pipeP and composeP functions.

Rewriting callback-based Node.js applications

Async functions return a Promise by default, so you can rewrite any callback-based function to use Promises, then await their resolution. You can use the util.promisify function in Node.js to turn callback-based functions into ones that return a Promise.
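
For example, fs.readFile can be turned into a Promise-returning function like this:

const util = require('util');
const fs = require('fs');

// readFile now returns a Promise instead of taking a callback
const readFile = util.promisify(fs.readFile);

async function printConfig () {
  const content = await readFile('./config.json', 'utf8');
  console.log(content);
}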

Rewriting Promise-based applications

Simple .then chains can be upgraded in a pretty straightforward way, so you can move to using async/await right away.

function asyncTask () {
  return functionA()
    .then((valueA) => functionB(valueA))
    .then((valueB) => functionC(valueB))
    .then((valueC) => functionD(valueC))
    .catch((err) => logger.error(err))
}
 

will turn into

async function asyncTask () {
  try {
    const valueA = await functionA();
    const valueB = await functionB(valueA);
    const valueC = await functionC(valueB);
    return await functionD(valueC);
  } catch (err) {
    logger.error(err);
  }
}

Rewriting Node.js apps with async await

  • If you liked the good old concepts of if-else conditionals and for/while loops,
  • if you believe that a try-catch block is the way errors are meant to be handled,

you will have a great time rewriting your services using async/await.

As we have seen, it can make several patterns a lot easier to code and read, so it is definitely more suitable in several cases than Promise.then() chains. However, if you are caught up in the functional programming craze of the past years, you might wanna pass on this language feature.

Are you already using async/await in production, or you plan on never touching it? Let’s discuss it in the comments below.

Are you looking for help with enterprise-grade Node.js Development?
Hire the Node developers of RisingStack!

Handling runtime environment variables in create-react-apps

Have you ever run into a problem in production/staging, when you just wanted to change the API URL in your React app in a quick and easy way?

Usually, to change the API URL you need to rebuild your application and redeploy it. If it’s in a Docker container, you need to rebuild the whole image again to fix the issue, which can cause downtime. If it’s behind a CDN, you need to clear the cache too. Also, in most cases, you need to make/maintain two different builds for staging and production just because you are using different API URLs.

Of course, there have been solutions to solve these kinds of problems, but I found none of them self-explanatory; they all required some time to understand.

The resources out there are confusing; there are quite a lot of them and none was a package I could install and use easily. Many of them are Node.js servers which the client queries at startup on a specific URL (/config for example), others require hard-coding the API URLs and changing them based on NODE_ENV, or rely on bash script injection (which isn't great for someone developing on Windows without WSL), etc.

I wanted something that works well on any OS and also works the same in production.

We have come up with our solutions over the years here at RisingStack, but everyone had a different opinion about what is the best way to handle runtime environment variables in client apps. So I decided to give it a try with a package to unify this problem (at least for me:)).

I believe that my new package, runtime-env-cra solves this problem in a quick and easy way. You won’t need to build different images anymore, because you want to change only an environment variable.

Cool, how should I use or migrate to runtime-env-cra?

Let’s say you have a .env file in your root already with the following environment variables.

NODE_ENV=production
REACT_APP_API_URL=https://api.my-awesome-website.com
REACT_APP_MAIN_STYLE=dark
REACT_APP_WEBSITE_NAME=My awesome website
REACT_APP_DOMAIN=https://my-awesome-website.com

You are using these environment variables in your code as process.env.REACT_APP_API_URL now.

Let’s configure the runtime-env-cra package, and see how our env usage will change in the code!

$ npm install runtime-env-cra

Modify your start script to the following in your package.json:

...
"scripts": {
"start": "NODE_ENV=development runtime-env-cra --config-name=./public/runtime-env.js && react-scripts start",
...
}
...

You can see the --config-name parameter for the script, which we use to tell the tool where our config file should be generated at startup.

NOTE: You can change the name and location with the --config-name flag. If you want a different file name, feel free to change it, but in this article and examples, I’m going to use runtime-env.js. The config file in the provided folder will be injected during webpack build time.

If you are using another name than .env for your environment variables file, you can also provide that with the --env-file flag. By default --env-file flag uses ./.env.

Add the following to public/index.html inside the <head> tag:

<!-- Runtime environment variables -->
<script src="%PUBLIC_URL%/runtime-env.js"></script>

This runtime-env.js will look like this:

window.__RUNTIME_CONFIG__ = {"NODE_ENV":"production","REACT_APP_API_URL":"https://api.my-awesome-website.com","REACT_APP_MAIN_STYLE":"dark","REACT_APP_WEBSITE_NAME":"My awesome website","REACT_APP_DOMAIN":"https://my-awesome-website.com"};

During local development, we want to always use the .env file (or the one you provided with the --env-file flag), so that’s why you need to provide NODE_ENV=development to the script.

If it gets development, it means you want to use the content of your .env. If you provide anything other than development (or nothing) for NODE_ENV, it will parse the variables from your session.

And as the last step, replace process.env with window.__RUNTIME_CONFIG__ in our application, and we are good to go!
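
For example, reading the API URL would change like this:

// before
const apiUrl = process.env.REACT_APP_API_URL;

// after
const apiUrl = window.__RUNTIME_CONFIG__.REACT_APP_API_URL;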

What if I’m using TypeScript?

If you are using TypeScript, you are probably wondering how to get auto-completion for this. All you need to do is create a src/types/globals.ts file with the following content (modify the __RUNTIME_CONFIG__ properties to match your environment):

export {};

declare global {
 interface Window {
   __RUNTIME_CONFIG__: {
     NODE_ENV: string;
     REACT_APP_API_URL: string;
     REACT_APP_MAIN_STYLE: string;
     REACT_APP_WEBSITE_NAME: string;
     REACT_APP_DOMAIN: string;
   };
 }
}

Add "include": ["src/types"] to your tsconfig.json:

{
"compilerOptions": { ... },
"include": ["src/types"]
}

Now you have TypeScript support also. 🙂

What about Docker, and running in production?

Here’s an example of an alpine based Dockerfile with a multi-stage build, using just an Nginx to serve our client.

# -- BUILD --
FROM node:12.13.0-alpine as build

WORKDIR /usr/src/app

# install dependencies first so this layer can be cached
COPY package* ./
RUN npm install

COPY . .
RUN npm run build

# -- RELEASE --
FROM nginx:stable-alpine as release

COPY --from=build /usr/src/app/build /usr/share/nginx/html
# copy .env.example as .env to the release build
COPY --from=build /usr/src/app/.env.example /usr/share/nginx/html/.env
COPY --from=build /usr/src/app/nginx/default.conf /etc/nginx/conf.d/default.conf

RUN apk add --update nodejs
RUN apk add --update npm
RUN npm install -g runtime-env-cra@0.2.2

WORKDIR /usr/share/nginx/html

EXPOSE 80

CMD ["/bin/sh", "-c", "runtime-env-cra && nginx -g \"daemon off;\""]

The key point here is to have a .env.example in your project, which represents your environment variable layout. The script will know which variables it needs to parse from the system. Inside the container, we can lean on that .env as a reference point.

Make sure you start the app with runtime-env-cra && nginx in the CMD section; this way the script can always parse the newly added or modified environment variables in your container.

Examples

Here you can find more detailed and working examples on this topic (docker + docker-compose):

Link for the package on npm and Github:

Hope you will find it useful!