When building something new – a minimum viable product, for example – starting with microservices is hard and time-wasting. You don’t know yet what the product will become, so defining the service boundaries is not possible. Because of this, companies should start with a majestic monolithic architecture – but as the team and the user base grow, you may need to rethink that approach.
The monolithic architecture
As DHH points out as well, the monolith can work pretty well for small companies. As your team grows, you are going to step on each other’s toes more and more often – and have fun with never-ending merge conflicts.
To solve these problems you have to make changes – changes affecting not just the structure of your application but the organization as well: introducing microservices.
Of course, stopping product development for months or even years to make this change is unacceptable – you have to do it in baby steps. This is when evolutionary design comes into the picture.
Evolutionary design
Evolutionary design is a software development practice of creating and modifying the design of a system as it is developed, rather than purporting to specify the system completely before development starts.
Translating this definition to monoliths and microservices: you start with a monolithic architecture, then, as complexity and the team grow, you introduce microservices. But how?
Let’s take the following example of a monolithic system:
In this example application we have a key-value store for volatile data used for caching purposes, and a document store for information we want to keep in the long run. Also, this application communicates with external APIs, like payment providers or Facebook.
Let’s see how to add new features as services!
Adding features / services to APIs
The simplest possible scenario here is that you build an API. In this case, your API is exposed as a single application to the outside world – and when introducing microservices, you don’t want to change that.
As a solution, you can add a proxy in front of the legacy API server. At the beginning, all the requests will go to the legacy application; as new logic is added or old logic is moved into services, only the routing table in the proxy has to be modified.
The proxy in this example can be anything from nginx to node-http-proxy – both support extensions, so you can move logic like authentication there as well.
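As a minimal sketch, the routing with node-http-proxy could look something like this – the /payments route and the target ports are made-up placeholders for this example:

var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

http.createServer(function (req, res) {
  // requests for the extracted service are routed to it,
  // everything else still hits the legacy application
  if (req.url.indexOf('/payments') === 0) {
    proxy.web(req, res, { target: 'http://localhost:3001' });
  } else {
    proxy.web(req, res, { target: 'http://localhost:3000' });
  }
}).listen(8080);

As more logic moves out of the legacy application, only this routing logic has to grow – clients keep talking to a single endpoint.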
Adding features / services to web applications
In this scenario, the main difference is that you have a legacy application with a user interface. Adding features here can be a bit trickier if you want them to serve parts of the UI as well.
You have two approaches here – both can work quite well:
- adding new features as SPAs in signed iframes (see the sketch after this list)
- adding new features as APIs and frontend components
Note: you will have to touch the legacy application at least a little to add new services.
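To give you an idea of the signed-iframe approach, here is a minimal sketch of generating a signed iframe URL with Node's crypto module – the /embed path, the one-minute expiry, and the IFRAME_SECRET environment variable are assumptions for illustration, not a fixed recipe:

var crypto = require('crypto');

// secret shared between the legacy application and the new service (assumed setup)
var SECRET = process.env.IFRAME_SECRET;

function signedIframeUrl(userId) {
  var payload = 'userId=' + userId + '&expires=' + (Date.now() + 60 * 1000);
  var signature = crypto.createHmac('sha256', SECRET)
    .update(payload)
    .digest('hex');
  // the new service recomputes the HMAC and rejects expired or tampered URLs
  return 'https://new-service.example.com/embed?' + payload + '&signature=' + signature;
}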
Security perspectives
When you are adding new services to a legacy system, one of the key aspects should be security. How are these services going to communicate with the old one? How are services going to communicate with each other? Just a few questions to answer before jumping into the unknown.
Again, you have options:
- do the authentication on the proxy level
- authenticate using the legacy application
What we usually do in these cases is go with request signing – it works well in both cases. In the first one, the proxy can validate the signature, while in the second, the legacy application has to sign the requests.
Of course, you can use the same request signing when new services communicate with each other. If your services are built using Node.js, you can use the node-http-signature module by Joyent. In practice, it will look something like this on the server:
var fs = require('fs');
var https = require('https');
var httpSignature = require('http-signature');

var options = {
  key: fs.readFileSync('./key.pem'),
  cert: fs.readFileSync('./cert.pem')
};

https.createServer(options, function (req, res) {
  var rc = 200;
  // parse the signature-related headers of the incoming request
  var parsed = httpSignature.parseRequest(req);
  // keyId tells us where to find the public key to verify with
  var pub = fs.readFileSync(parsed.keyId, 'ascii');
  if (!httpSignature.verifySignature(parsed, pub)) {
    rc = 401;
  }
  res.writeHead(rc);
  res.end();
}).listen(8443);
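In this example, the keyId sent by the client is used directly as a file path to load the matching public key – that convention comes straight from the node-http-signature example and is fine for a demo, but in a real system you would map key IDs to keys yourself.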
To call this endpoint, you have to do something like this:
var fs = require('fs');
var https = require('https');
var httpSignature = require('http-signature');

var key = fs.readFileSync('./key.pem', 'ascii');

var options = {
  host: 'localhost',
  port: 8443,
  path: '/',
  method: 'GET',
  headers: {}
};

var req = https.request(options, function (res) {
  console.log(res.statusCode);
});

// Adds a 'Date' header in, signs it, and adds the
// 'Authorization' header in.
httpSignature.sign(req, {
  key: key,
  keyId: './cert.pem'
});

req.end();
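Note that httpSignature.sign has to run before req.end() – it is the sign call that adds the Date and Authorization headers the server will verify.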
But why the hassle with all the request signing? Why not just use a token for communication? My reasons:
- exposing the secret (the token) between services is not a good practice – in that case, TLS is a single point of failure
- you have no way to tell where the request originates from – anyone with the token can send valid requests
With request signing, services have a shared secret. Using that secret, you sign your requests, but the secret itself is never sent over the wire. For more on the topic, read our Node.js Security and Web Authentication Methods Explained articles.
Changes in the organization
When building monolithic architectures, the organization is usually built around functional teams. Managers work with other managers, engineers work with engineers. The main problem with this approach is that it introduces communication problems: units spend a lot of time in meetings instead of doing actual work. Also, there are a lot of dependencies between these units that have to be resolved.
On the other hand, microservices come hand-in-hand with cross-functional teams: teams with individuals in different roles, like database engineers, testers, infrastructure engineers, and designers. These cross-functional teams are built around business needs, so they can make decisions much faster.
For more on the topic, please refer to the Benefits of Cross-Functional Teams When Building Microservices article.
Summary
Killing the monolith and introducing microservices takes time and needs a relatively big effort – not just from the engineers but from the managers of the company as well. You can think of this transition as an investment in the future growth of the company: once you are done with it, your engineering team will move a lot faster, shipping features sooner and with less effort.
If you want to read more on the topic, feel free to subscribe to Microservice Weekly: a free, weekly newsletter with the best news and articles on microservices, hand-curated each week.