Tutorial: Building ExpressJS-based microservices using Hydra

This microservices tutorial describes how to use a new Node module called Hydra to build more capable ExpressJS microservices.

Before delving deeper, you should already know what microservices are and have a rough idea of how you might build them using ExpressJS. If not, there are a ton of great posts to help guide you – but sadly, this isn’t one of them.

We’ll examine how to build better ExpressJS microservices. But why? After all, ExpressJS already allows us to build microservices.

In fact, we can build three flavors of microservices: we can use HTTP APIs, WebSocket messages, or messaging services such as RabbitMQ and MQTT. In doing so, we just need to keep a core microservice goal in mind: each microservice needs to remain focused on providing a single service, unlike a monolithic application, which ends up providing many.

While we can do all of that using ExpressJS and perhaps a few select NPM packages, I’m going to show you how a single package, called Hydra, can supercharge your ExpressJS microservice efforts.

UPDATE: I wrote another article on using Hydra to build a microservices game as an example application. I recommend checking that one out too!

API servers vs. Microservices

If you’ve used ExpressJS for some time, then you’ve undoubtedly built an Express server which hosts API endpoints. Such a task is considered a Node developer’s rite of passage. And if your server’s APIs are specific – such as authorization using JSON web tokens, user profile management, or image resizing – then you might even be able to call your server a microservice.

However, the microservices we’ll consider in this post will do more than implement API endpoints. They’ll also feature:

  • Service discovery
  • Inter-service messaging
  • Request load balancing
  • Service presence and health

The main takeaway here is that basic API servers are not automatically microservices. In the real world, the act of responding to a service request will likely involve more than simply returning a result. It may, for example, require services to speak with other services.

In this post, we’ll look at a simple ExpressJS app performing some pretty cool microservice feats.

UPDATE: After this post was first published, we quickly learned that we should have made this important disclaimer: while Hydra offers you lots of microservice tools – you don’t have to use them!

You’re free to use the features you need and ignore the ones you don’t. As your applications and deployments evolve, you can swap selected Hydra features for other solutions.

In this post we’re not advocating that Hydra is a one-stop solution for every microservice. That would be silly! We’re only saying that Hydra allows you to quickly and easily build microservices. A key benefit of the microservice architecture pattern is that you can iterate on services as they need to be improved.

We believe Hydra helps you get there, and depending on your scaling needs, you may find that Hydra is really all you need.

Enter Hydra

Much of what we considered thus far can still be accomplished using ExpressJS and the NPM modules of your choosing. However, your options will vary in complexity and infrastructure requirements.

As there is no guarantee that the modules you choose are designed to work seamlessly with one another, you’ll likely end up writing your own glue code to tie them together.

We’re going to focus on a less tedious approach, one that uses a new NPM package called Hydra. Hydra is designed to greatly simplify microservice concerns. We built Hydra at Flywheel Sports and open sourced it at the 2016 EmpireNode conference in New York City.

Another NPM package, called Hydra-express, uses Hydra (core) to create an easy-to-use binding for ExpressJS. And that’s what we’ll focus on in this post.

Here’s a list of Hydra features which are available via Hydra-Express:

  • Automated health and presence
  • Service discovery
  • Inter-service communication with support for HTTP RESTful API and WebSocket messaging
  • Self-registration with near zero configuration
  • Built-in job queues

UPDATE: Full documentation can be found here.

Prerequisites

You’ll need Node version 6.2.1 or greater to use Hydra. One reason is that Hydra is built using ES6.

You’ll also need access to an instance of Redis. Redis is the only external dependency Hydra has, and it uses it as an in-memory database and messaging server. If you’re unfamiliar with Redis or need to learn how to install it, see our short Getting Redis intro video.

The Hydra project also has some tools which will help you build and test microservices. Fortunately, those tools are just an npm install away. We’ll get to one of those shortly.

Adding Hydra to an ExpressJS app

Let’s begin at ground zero by considering a basic ExpressJS app and then comparing it against a hydra-express one.

If you would like to follow along you can create a folder called hydra-test and copy the following basic ExpressJS app into a file called index.js.

$ mkdir hydra-test; cd hydra-test;
$ vi index.js
$ npm init -y

var express = require('express')
var app = express()

app.get('/', function (req, res) {
  res.send('Hello World!')
})

app.listen(3000, function () {
  console.log('Example app listening on port 3000!')
})

Lastly, let’s add ExpressJS as a dependency to our new project and run it.

$ npm install express --save
$ node index

After running the basic ExpressJS app and accessing it in our browser using http://localhost:3000 we see the hello reply.

Great, let’s compare this to a hydra-express app. The following code is only slightly larger. If you’re following along, just copy and paste this into your existing index.js file.

var hydraExpress = require('fwsp-hydra-express');
var config = require('./config.json');

function onRegisterRoutes() {
  var express = hydraExpress.getExpress();
  var api = express.Router();
  
  api.get('/', function(req, res) {
    res.send('Hello World!');
  });
  hydraExpress.registerRoutes({
    '': api
  });
}

hydraExpress.init(config, onRegisterRoutes);

At the top of the file, we require our hydra-express module. Then, we load a config.json file which contains some basic settings that Hydra needs, namely the location of our Redis instance and the name of our microservice. We’ll review that in just a bit.

Next, we create a callback function called onRegisterRoutes which gets a handle to ExpressJS and proceeds to create our API endpoint. We use the hydraExpress.registerRoutes call to register our endpoints. This extra step is required because HydraExpress routes are discoverable and routable.
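
The object passed to registerRoutes maps a route prefix to an Express router; in our example the prefix is an empty string, so the route is mounted at the root. As a purely illustrative sketch (the /v1/hello prefix below is an assumption and isn’t used elsewhere in this tutorial), the same router could be mounted under a versioned path:

// Illustrative only: mount the router under a versioned prefix instead of the root.
// The '/v1/hello' prefix is an assumption for this sketch, not part of the tutorial code.
hydraExpress.registerRoutes({
  '/v1/hello': api
});

The endpoint would then presumably be reachable – and discoverable – under /v1/hello rather than at the root.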

The final thing we do here is to initialize HydraExpress using the config file we loaded and the routes callback we defined.

We’ll need to do two things before we can try this out. First, we install hydra-express and then we define a config.json file which will be loaded at runtime.

$ npm install fwsp-hydra-express --save

Here is the config file we’ll use:

{
  "hydra": {
    "serviceName": "hello",
    "serviceIP": "",
    "servicePort": 3000,
    "serviceType": "",
    "serviceDescription": "",
    "redis": {
      "url": "127.0.0.1",
      "port": 6379,
      "db": 15
    }
  }
}

Upon closer examination, we can see that the config file consists of a single root branch called hydra, which contains fields for service identification. Noticeably missing are values for the serviceIP, serviceType, and serviceDescription fields. These fields are optional. Setting serviceIP to an empty string tells Hydra to use the IP address of the machine it’s running on. You can also specify a value of zero for the servicePort field; that will cause Hydra to choose a random port above 1024. We’ll actually do that later in this post.

The config file also contains a branch called redis to specify the location of our Redis instance. Here we assume Redis is running locally. We also specify a db field containing the value 15. That’s the Redis database which will be used. It’s important that all instances of your microservices use the same db number in order to access presence information and messages.

We’re now ready to try this. Save the above config file as config.json, then start the project.

$ node index.js

If you tried this, then you were understandably unimpressed. The results are exactly the same as our basic ExpressJS app. No worries! I’ll work a bit harder to impress you.

In truth, there’s a lot more going on than appears. To see this more clearly let’s install a Hydra tool called the hydra-cli. The hydra-cli is a command line interface which allows us to interact with Hydra-enabled applications, such as the one we just created.

Hydra-cli

Let’s install hydra-cli.

$ sudo npm install -g hydra-cli 

Type hydra-cli to see what it offers.

$ hydra-cli
hydra-cli version 0.4.2
Usage: hydra-cli command [parameters]
See docs at: https://github.com/flywheelsports/hydra-cli

A command line interface for Hydra services

Commands:

help                         - this help list
config                       - configure connection to redis
config list                  - display current configuration
health [serviceName]         - display service health
healthlog [serviceName]      - display service health log
message create               - create a message object
message send message.json    - send a message
nodes [serviceName]          - display service instance nodes
rest path [payload.json]     - make an HTTP RESTful call to a service
routes [serviceName]         - display service API routes
services [serviceName]       - display list of services
  

We’ll try a few of those options, but first, we need to configure hydra-cli before we can use it.

$ hydra-cli config
redisUrl: 127.0.0.1
redisPort: 6379
redisDb: 15

Here we just provide the location of our Redis instance, which in our case is running locally.

With our hydra-express app still running in a terminal shell, we can open another shell and type hydra-cli nodes to view a list of running microservices.

$ hydra-cli nodes
[
  {
    "serviceName": "hello",
    "serviceDescription": "not specified",
    "version": "0.12.4",
    "instanceID": "2c87057963121e1d7983bc952951ff3f",
    "updatedOn": "2016-12-29T17:21:35.100Z",
    "processID": 74222,
    "ip": "192.168.1.186",
    "port": 3000,
    "elapsed": 0
  }
]

Here we see that we have a service named hello running and that it has an instanceID assigned. We also see the ip and port number it is listening on.

This information is being emitted by our running service. Hydra-cli is merely displaying this information from Redis and not actually speaking with our service. At least not yet!

We can also see the health and presence information that our hello service is emitting, using the hydra-cli health command.

$ hydra-cli health hello
[
  [
    {
      "updatedOn": "2016-12-29T17:35:46.032Z",
      "serviceName": "hello",
      "instanceID": "2c87057963121e1d7983bc952951ff3f",
      "sampledOn": "2016-12-29T17:35:46.033Z",
      "processID": 74222,
      "architecture": "x64",
      "platform": "darwin",
      "nodeVersion": "v6.9.2",
      "memory": {
        "rss": 47730688,
        "heapTotal": 26251264,
        "heapUsed": 21280416
      },
      "uptime": "16 minutes, 6.429 seconds",
      "uptimeSeconds": 966.429,
      "usedDiskSpace": "63%"
    }
  ]
]

Lots of useful information there. How about seeing which routes are exposed? Try hydra-cli routes.

$ hydra-cli routes
{
  "hello": [
    "[get]/",
    "[GET]/_config/hello"
  ]
}

Here we see two routes. The second route allows us to access the configuration information for a service. If you’re interested, you can access that route in your web browser at: http://localhost:3000/_config/hello

We can also invoke a service route via the hydra-cli rest command.

$ hydra-cli rest hello:[get]/
{
  "headers": {
    "access-control-allow-origin": "*",
    "x-process-id": "74222",
    "x-dns-prefetch-control": "off",
    "x-frame-options": "SAMEORIGIN",
    "x-download-options": "noopen",
    "x-content-type-options": "nosniff",
    "x-xss-protection": "1; mode=block",
    "x-powered-by": "hello/0.12.4",
    "content-type": "text/html; charset=utf-8",
    "content-length": "12",
    "etag": "W/\"c-7Qdih1MuhjZehB6Sv8UNjA\"",
    "x-response-time": "18.029ms",
    "date": "Thu, 29 Dec 2016 17:42:49 GMT",
    "connection": "close"
  },
  "body": "Hello World!",
  "statusCode": 200
}

The hydra-cli rest command allows us to specify a service name and route path. This is useful when testing our service endpoints. You’ll notice that the route path has a specific format. The first part of the route is the service name, which is separated by a colon character, followed by an HTTP method type enclosed in square brackets. Lastly, the route path is appended.

Service name   Colon   HTTP method   API route
hello          :       [get]         /

This format is how we specify routing in Hydra. You may have noticed that in our example above we didn’t specify the IP or port address for our hello service. Yet, hydra-cli was able to locate it and call its default route. This works using hydra’s service discovery feature.

You might be wondering how hydra-cli actually works. There’s nothing special about hydra-cli, it’s just a command line client which uses hydra-core.

Two key points here: hydra-cli isn’t a microservice, and Hydra is just a library for building distributed applications – not just microservices. You could, for example, build a monitoring service which simply watches and reports service health and presence information. We did, and called it our Hydra wallboard monitor.

The same functionality available to hydra-cli via hydra-core is also available to our hydra-enabled hello microservice.

Let’s take a deeper dive.

Hydra deep dive

We’ll modify our basic hydra-enabled application to see what other hydra-express features we can take advantage of.

Take the following code and replace the contents of the index.js file. This version looks almost identical to our earlier version. The only real change is the use of the hydraExpress.getHydra call, which returns a reference to the underlying hydra-core instance. We use that to call two Hydra core methods, getServiceName and getInstanceID. Lastly, we return an object with those fields when our default route is called.

var hydraExpress = require('fwsp-hydra-express');
var hydra = hydraExpress.getHydra();
var config = require('./config.json');

function onRegisterRoutes() {
  var express = hydraExpress.getExpress();
  var api = express.Router();
  
  api.get('/', function(req, res) {
    res.send({
      msg: `hello from ${hydra.getServiceName()} - ${hydra.getInstanceID()}`
    });
  });
  hydraExpress.registerRoutes({
    '': api
  });
}

hydraExpress.init(config, onRegisterRoutes);

Next, we restart our hello service in one shell and use hydra-cli in another shell to call it.

$ hydra-cli rest hello:[get]/
{
  "msg": "hello from hello - 2c87057963121e1d7983bc952951ff3f"
}

The service ID is a generated identifier assigned to each instance of a service. It’s useful for identification purposes and in a number of other situations such as message routing.

Now, what if we wanted to run multiple instances of our hello service? One immediate issue is that our service is using port 3000 and only a single process can bind to a port address at one time. Let’s change that.

Open the config.json file and change the servicePort address to zero.

{
  "hydra": {
    "serviceName": "hello",
    "serviceIP": "",
    "servicePort": 0,
    "serviceType": "",
    "serviceDescription": "",
    "redis": {
      "url": "172.16.0.2",
      "port": 6379,
      "db": 15
    }
  }
}

Now, restart the hello service. Notice that now it selects a random port. In the output below the selected port is 20233.

$ node index
INFO
{ event: 'info',
  message: 'Successfully reconnected to redis server' }
INFO
{ event: 'start',
  message: 'hello (v.0.12.4) server listening on port 20233' }
INFO
{ event: 'info', message: 'Using environment: development' }

We can confirm that by using hydra-cli.

$ hydra-cli nodes
[
  {
    "serviceName": "hello",
    "serviceDescription": "not specified",
    "version": "0.12.4",
    "instanceID": "b4c05d2e37c7b0aab98ba1c7fdc572d5",
    "updatedOn": "2016-12-29T19:43:22.673Z",
    "processID": 78792,
    "ip": "192.168.1.186",
    "port": 20233,
    "elapsed": 1
  }
]

Start another hello service in a new shell. Notice that it gets a different port address and that hydra-cli now detects two hello services. Our new service instance is assigned port 30311.

$ hydra-cli nodes
[
  {
    "serviceName": "hello",
    "serviceDescription": "not specified",
    "version": "0.12.4",
    "instanceID": "445ef40d258b8b18ea0cc6bd7c2809f3",
    "updatedOn": "2016-12-29T19:46:59.819Z",
    "processID": 79186,
    "ip": "192.168.1.186",
    "port": 30311,
    "elapsed": 4
  },
  {
    "serviceName": "hello",
    "serviceDescription": "not specified",
    "version": "0.12.4",
    "instanceID": "3a18ce68a67bfdca75595024d3dc4998",
    "updatedOn": "2016-12-29T19:47:03.353Z",
    "processID": 79164,
    "ip": "192.168.1.186",
    "port": 20233,
    "elapsed": 0
  }
]

This is kind of cool. But what happens when we use hydra-cli to access our hello service? Which service instance gets called? And what if we want to access a specific service instance?

Let’s find out.

$ hydra-cli rest hello:[get]/
{
  "msg": "hello from hello - 445ef40d258b8b18ea0cc6bd7c2809f3"
}


$ hydra-cli rest hello:[get]/
{
  "msg": "hello from hello - 3a18ce68a67bfdca75595024d3dc4998"
}

Calling our service multiple times results in one of two service instances replying. What’s really happening here? Hydra is load balancing requests across multiple instances – without a dedicated load balancer.

If we prefer, we can call a specific service instance with a slight modification to the route path.

$ hydra-cli rest 445ef40d258b8b18ea0cc6bd7c2809f3@hello:[get]/
{
  "msg": "hello from hello - 445ef40d258b8b18ea0cc6bd7c2809f3"
}

$ hydra-cli rest 445ef40d258b8b18ea0cc6bd7c2809f3@hello:[get]/
{
  "msg": "hello from hello - 445ef40d258b8b18ea0cc6bd7c2809f3"
}

We simply prefix a route with the service instance ID of the service we would like to use and separate it with an @ symbol. So we’re saying send to: “serviceID at service using route”. Running the call a few times confirms that we’re only accessing a single instance.

Keep in mind that while we’re looking at pretty basic examples, these features are powerful when used with Hydra messaging. That works whether the transport is HTTP or WebSocket based.

So, in addition to routing – where we didn’t have to specify IP addresses or ports – Hydra also does automatic load balancing.

And there’s more. What if one of our hello services dies? That’s easy – let’s just stop one of them and call the hello service a few times.

$ hydra-cli rest hello:[get]/
{
  "msg": "hello from hello - 3a18ce68a67bfdca75595024d3dc4998"
}

$ hydra-cli rest hello:[get]/
{
  "msg": "hello from hello - 3a18ce68a67bfdca75595024d3dc4998"
}

We can see that calls don’t get routed to the dead service. Here we see Hydra’s presence management at work. Once a service is no longer present, calls simply don’t get routed to it. It also no longer appears on our list of service instances. We can confirm that by using hydra-cli nodes.

These features allow you to build microservices which can be started and stopped across machines on a common network. You don’t have to care where a service instance lives – and you can still route calls to an available instance. Also, you’ll notice that we didn’t have to write any code to handle presence management.

This underlying functionality has allowed us to build a tool called the hydra-router, a service-aware router and API gateway. This tool supports routing external requests and messages via RESTful HTTP or WebSockets.

UPDATE: We’ve been using Hydra-express locally, however if you’d like to run the examples in this post on different machines, you only need a network-accessible instance of Redis. Simply update hydra.redis.url in config.json to point to it. Also, make sure to run hydra-cli config to update the location of Redis.

Messaging

In a large application, a set of microservices may need to call one another. Hydra facilitates this by using its underlying service discovery and routing features. Again, a key point here is that a service need not concern itself with the physical location of another service, nor do we need to build infrastructure to route and load balance requests. Rather, with Hydra, we’re looking at microservices which know how to communicate with one another in a highly scalable manner.

Let’s see this in action. We’ll build a new service called friend. The friend service will send a message to our hello service on startup. Using the code below, create a file called friend.js.

var hydraExpress = require('fwsp-hydra-express');
var hydra = hydraExpress.getHydra();
var config = require('./config.json');

config.hydra.serviceName = 'friend';

hydraExpress.init(config, () => {})
  .then((serviceInfo) => {
    console.log('serviceInfo', serviceInfo);
    let message = hydra.createUMFMessage({
      to: 'hello:[get]/',
      from: 'friend:/',
      body: {}
    });
    return hydra.makeAPIRequest(message)
      .then((response) => {
        console.log('response', response);
      });
  })
  .catch(err => console.log('err', err));

This code is quite similar to our index.js file but has a few important differences.

First, we overwrite the service name when we load the config.json. This is necessary since we’re sharing a single config.json file between two services for the sake of simplicity. Another difference is that we’re not registering any routes with the hydraExpress.init call. Instead, we use a little ES6 arrow function action to pass an empty callback.

We’ve also added a promise .then method to perform an action when the promise returned by hydraExpress.init resolves. This is handy since it allows us to perform actions once hydraExpress is fully initialized. In our case, we’re going to send a message to the hello service. Let’s take a closer look at that part of the code.

let message = hydra.createUMFMessage({
  to: 'hello:[get]/',
  from: 'friend:/',
  body: {}
});

Here we use a Hydra method called createUMFMessage. UMF (Universal Messaging Format) is a simple JSON object format that Hydra uses to define routable messages. As you can see, we’re simply passing in a JavaScript object containing three fields: a to, a from, and a body field. There are additional UMF fields we could use, but those three are the only required ones.

The to field contains the familiar routing format we saw earlier. It consists of the service name, an HTTP method, and a route path. The from field simply says that this message originated from the friend service. The body field is left empty since we don’t need it for this example. However, you’ll want to use it with POST and PUT operations where the body is significant.
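
For example, if the hello service exposed a hypothetical hello:[post]/greet endpoint (it doesn’t in this tutorial), the body field would carry the POST payload. A sketch under that assumption:

// Sketch only: assumes a hypothetical hello:[post]/greet endpoint.
let postMessage = hydra.createUMFMessage({
  to: 'hello:[post]/greet',   // service name, HTTP method, route path
  from: 'friend:/',           // originating service
  body: {                     // payload sent as the POST body
    name: 'Hydra fan'
  }
});
hydra.makeAPIRequest(postMessage)
  .then((response) => console.log('response', response));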

So what does the createUMFMessage function do with the object we passed it? If we console.log the returned message we’d see something like this:

{
  "to": "hello:[get]/",
  "from": "friend:/",
  "mid": "7353c34e-c52e-4cce-a165-ca5a5e100f54",
  "timestamp": "2016-12-30T14:34:46.685Z",
  "version": "UMF/1.4.3",
  "body": {}
}

The createUMFMessage call is essentially a helper function that adds other UMF fields which are useful for routing and message tracking.

Now that we have a message, we use hydra.makeAPIRequest to actually send it.

hydra.makeAPIRequest(message)
  .then((response) => {
    console.log('response', response);
  });

makeAPIRequest, like many Hydra methods, returns a promise. We add a .then handler to log the message response.

Two important callouts here: creating a message is really easy, and we don’t have to concern ourselves with where the hello service is actually located.

When we try this example we’ll see an output response similar to:

response { msg: 'hello from hello - 3a18ce68a67bfdca75595024d3dc4998' }

That’s the response from the hello service. If you’re running multiple instances of the hello service, you’d see the service instance ID change between calls.
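
If you wanted to pin the request to a specific instance from code, the instanceID@service addressing we used with hydra-cli earlier should work here as well, since hydra-cli itself relies on hydra-core. A minimal sketch (the instance ID below is the one from the example above; substitute a real one from hydra-cli nodes or hydra.getInstanceID()):

// Sketch: target a specific instance by prefixing the route with its instance ID.
let instanceMessage = hydra.createUMFMessage({
  to: '3a18ce68a67bfdca75595024d3dc4998@hello:[get]/',
  from: 'friend:/',
  body: {}
});
hydra.makeAPIRequest(instanceMessage)
  .then((response) => console.log('response', response));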

And it gets better since Hydra supports non-HTTP messaging.

Let’s look at an example which doesn’t use ExpressJS routes. To do this, we’ll need to change our hello service and friend service slightly.

First, let’s consider minor changes to the friend service.

var hydraExpress = require('fwsp-hydra-express');
var hydra = hydraExpress.getHydra();
var config = require('./config.json');

config.hydra.serviceName = 'friend';

hydraExpress.init(config, () => {})
  .then((serviceInfo) => {
    console.log('serviceInfo', serviceInfo);

    hydra.on('message', (message) => {
      console.log('message reply', message);
    });

    let message = hydra.createUMFMessage({
      to: 'hello:/',
      frm: 'friend:/',
      bdy: {}
    });

    hydra.sendMessage(message);
  })
  .catch(err => console.log('err', err));

So again we don’t define any HTTP routes. As we scan the code above, we see the addition of an event handler, the hydra.on method. In this example, this handler simply logs any messages that Hydra receives from other hydra-enabled applications. In more complex situations we might dispatch messages to other parts of our application and even other services.

Next, we see that when we create our message we don’t specify the HTTP get method using hello:[get]/ as we did earlier. The reason is that we’re not using HTTP in this case. Another difference is that the UMF key names seem to be abbreviated. Internally Hydra uses a short form of UMF in order to reduce message sizes. There are ways to convert from short to long message format – but we won’t concern ourselves with that in this example.
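
For reference, here’s how the long- and short-form UMF field names used in this post line up (the mid field is the same in both forms):

// Long-form UMF keys and their short-form equivalents,
// based on the messages shown in this post.
const umfFieldMap = {
  to: 'to',
  from: 'frm',
  body: 'bdy',
  mid: 'mid',
  timestamp: 'ts',
  version: 'ver'
};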

The next thing we see is the use of a new Hydra function called sendMessage. And that’s all we need to do to send a message.

Let’s turn our attention back to our hello service. Update your index.js with the following code.

var hydraExpress = require('fwsp-hydra-express');
var hydra = hydraExpress.getHydra();
var config = require('./config.json');

hydraExpress.init(config, () => {})
  .then((serviceInfo) => {
    console.log('serviceInfo', serviceInfo);
    hydra.on('message', (message) => {
      let messageReply = hydra.createUMFMessage({
        to: message.frm,
        frm: 'hello:/',
        bdy: {
          msg: `hello from ${hydra.getServiceName()} - ${hydra.getInstanceID()}`
        }
      });
      hydra.sendMessage(messageReply);
    });
    return 0;
  })
  .catch(err => console.log('err', err));

Here we simply define an on message handler using Hydra. When a message arrives, we create a response sending back the familiar service name and service instance ID. Note that this time we’re sending back data in the body field.

So to recap: creating and sending messages is really simple. Receiving messages is simply a matter of defining an on message handler.

If we update the index.js and run it, then update our friend.js file and run it – we should see something like this in the output from the friend service.

message reply { to: 'friend:/',
  frm: 'hello:/',
  mid: 'a2b29527-a5f8-41bc-b780-ca4f7cdd9557',
  ts: '2016-12-30T15:28:03.554Z',
  ver: 'UMF/1.4.3',
  bdy: { msg: 'hello from hello - a3d3272535dbd651e896ed10dd2e03b9' } }

In this example, we saw two services communicating without the use of HTTP calls. Instead, our services used Hydra routable messages.

In fact, to build microservices like this we don’t even need to use ExpressJS or hydra-express. We can simply build Node applications using hydra-core. This is an important option if you prefer a framework other than ExpressJS. Hydra core and HydraExpress are just libraries. You get to decide where and how you use them.
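
As a rough sketch of what a hydra-core-only service might look like (hedged: the package name and init signature below are assumptions based on how hydra core was published when this post was written – check the Hydra repo for the current package name and exact API):

// Sketch: a plain Node service using hydra core, without ExpressJS.
// Assumes the core package is installed as 'fwsp-hydra' (its name at the time of
// writing) and that init accepts the 'hydra' block from config.json; consult the
// Hydra docs for the exact signature in your version.
var hydra = require('fwsp-hydra');
var config = require('./config.json');

hydra.init(config.hydra)
  .then(() => {
    hydra.on('message', (message) => {
      console.log('received', message);
    });
  })
  .catch(err => console.log('err', err));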

Building ExpressJS-based microservices using Hydra – Conclusion

In this brief introduction, we’ve seen how hydra-express apps support microservice concerns such as service discovery, message routing, load balancing, presence, and health monitoring.

And this is just the tip of the iceberg; there’s a lot more which is beyond the scope of this post.

UPDATE: I wrote another article on using Hydra to build a microservices game as an example application. I recommend checking that one out too!

We built Hydra because we felt that building microservices should be easier. At Flywheel Sports, Hydra is under active development, and we’ve already seen significant productivity gains as our teams use Hydra to build our AWS-hosted, Dockerized, production-level microservices. We invite you to learn more about Hydra and join us in contributing to its development.

Learn more on our Hydra Repo.

This article is written by Carlos Justiniano. The author’s bio:
“Veteran software developer, world record holder, author & photographer. Currently Chief Architect at Flywheel Sports. More: http://cjus.me/”
