Monitoring Microservices Architectures: Enterprise Best Practices

By reading the following article, you can gain insight into how lead engineers at IBM, Financial Times and Netflix think about the pain points of application monitoring, and what their best practices are for maintaining and developing microservices. I'd also like to introduce a solution we developed at RisingStack, which aims to tackle the most important issues of monitoring microservices architectures.


Tearing down a monolithic application into a microservices architecture brings tremendous benefits to engineering teams and organizations. New features can be added without rewriting other services. Smaller codebases make development easier and faster, and the parts of an application can be scaled separately.

Unfortunately, migrating to a microservices architecture has its challenges as well, since it results in a complex distributed system where it can be difficult to understand the communication and request flow between services. Monitoring also gets increasingly frustrating as a myriad of services generates a flood of unreliable alerts and unactionable metrics.

Visibility is crucial for IBM with monitoring microservices architectures

Jason McGee, Vice President and Chief Technical Officer of Cloud Foundation Services at IBM, walked us through the microservice-related problems enterprises often face in his highly recommended DockerCon interview with The New Stack.

For a number of years, according to Jason, development teams were struggling to cope with the ever-increasing speed and delivery pressure they faced, but with the arrival of microservices, things have changed.

Migrating from the Monolith to a Microservices Architecture

In a microservices architecture, a complex problem can be broken up into units that are truly independent, so the parts can continue to work separately. The services are decoupled, so people can operate in small groups with less coordination and therefore they can respond more quickly and go faster.

“It’s interesting that a lot of people talk about microservices as a technology when in reality I think it’s more about people, and how people are working together.”

The important thing about microservices for Jason is that an organization can give 5 or 10 people responsibility for a function, and they can manage that function throughout its lifecycle and update it whenever they need to - without having to coordinate with the rest of the world.

“But in technology, everything has a tradeoff, a downside. If you look at microservices at an organization level, the negative trade-off is the great increase in the complexity of operations. You end up with a much more complex operating environment.”

Right now, a lot of the activity in the microservices space is about what kind of tools and management systems teams have to put around their services to make microservices architectures practical, said Jason. Teams working with microservices have to understand how they want to factor their applications, what approach they want to take for wiring everything together, and how they can gain visibility into their services.

The first fundamental problem developers have to solve is how the services are going to find each other. After that, they have to manage complexity by instituting some standardized approach for service discovery. The second biggest problem is about monitoring and bringing visibility to services. Developers have to understand what’s going on, by getting visibility into what is happening in their cloud-based network of services.

Put simply: an app can have hundreds of services behind the scenes, and if it doesn't work, someone has to figure out what's going on. When developers just see miles of logs, they are going to have a hard time tracing a problem back to its cause. That's why people working with microservices need excellent tools that provide actionable output.

“There is no way a human can map how everyone is talking to everyone, so you need new tools to give you the visibility that you need. That’s a new problem that has to be solved for microservices to become an option.”


At RisingStack, as an enterprise Node.js development and consulting company, we have experienced the same problems with microservices since the moment of their conception.

Our frustration at not having the proper tools to solve these issues led us to develop our own solution called Trace, a microservice monitoring tool with distributed transaction tracking, error detection, and process monitoring. The tool is currently in open beta, so it can be used for free.

If you’d like to give it a look, we’d appreciate your feedback on our Node.js monitoring platform.


Financial Times eases the pain of monitoring microservices architectures with the right tools and smart alerts

Sarah Wells, Principal Engineer at the Financial Times, told the story of what it's like to move from monitoring a monolithic application to monitoring a microservice architecture in her Codemotion presentation titled Alert overload: How to adopt a microservices architecture.

About two years ago, the Financial Times started working on a new project with the goal of building a new content platform (Fast FT) with a microservices architecture and APIs. The project team also started to do DevOps at the same time, because they were building a lot of new services and couldn't take the time to hand them over to a different operations team. According to Sarah, supporting their own services meant that all the pain the operations team used to feel from shoddy monitoring and alerting was suddenly transferred to them.

“Microservices make it worse! Microservices are an efficient device for transforming business problems into distributed transaction problems.”

It’s also important to note here, that there’s a lot of things to like about microservices as Sarah mentioned:

“I am very happy that I can reason about what I’m trying to do because I can make changes live to a very small piece of my system and roll back really easily whenever I want to. I can change the architecture and I can get rid of the old stuff much more easily than I could when I was building a monolith.”

Let's see what the biggest challenge was that the DevOps team at Financial Times faced with a microservice architecture. According to Sarah, monitoring suddenly became much harder because they had far more systems than before. The app they built consisted of 45 microservices. They had 3 environments (integration, test, production) and 2 VMs for each of those services, and they ran 20 different checks per service (for things like CPU load, disk status, functional tests, etc.) at least every 5 minutes. That adds up to 45 × 3 × 2 × 20 × 288 runs a day, roughly 1,500,000 checks a day, which meant that they got alerts for unlikely and transient things all the time.

“When you build a microservices architecture and something fails, you’re going to get an alert from a service that’s using it. But if you’re not clever about how you do alerts, you’re also going to get alerts from every other service that uses it, and then you get a cascade of alerts.”

When a new developer joined Sarah's team, he couldn't believe the number of emails they got from different monitoring services, so he started to count them. The result was over 19,000 system monitoring alerts in 50 days, 380 a day on average. Functional monitoring was also an issue, since the team wanted to know when their response times were getting slow or when they logged or returned an error to anyone. Needless to say, they were swamped by alerts: 12,745 response time or error alerts in 50 days, 255 a day on average.

Monitoring a Microservices Architecture can cause trouble with Alerting

Sarah and the team finally developed three core principles for making this almost unbearable situation better.

1. Think about monitoring from the start.

The Financial Times team created far too many alerts without thinking about why they were doing it. As it turned out, it was the business functionality they really cared about, not the individual microservices - so that's what their alerting should have focused on. At the end of the day, they only wanted an alert when they needed to take action. Otherwise, it was just noise. They made sure that the alerts were actually good, because anyone reading them should be able to work out what they mean and what needs to be done.

According to Sarah's experience, a good alert has clear language, is not fake, and contains a link to more explanatory information. They had also developed a smart solution: they tied all of their microservices together by passing transaction IDs around as request headers, so the team could instantly tell which event in the system caused an error, and they could even search for it. The team also established health checks for every RESTful application, since they wanted to know early about problems that could affect their customers.
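
To illustrate the idea of transaction IDs and health checks, here is a minimal sketch in Node.js with Express. This is not FT's actual implementation: the header name, route, port and downstream URL are made-up examples, and the global fetch call assumes Node 18+.

// Minimal sketch: propagate a transaction ID on every request and expose a health check.
// Header name, routes and downstream URL are illustrative assumptions.
const express = require('express');
const crypto = require('crypto');

const app = express();

// Reuse the caller's transaction ID if present, otherwise generate a new one,
// and echo it back so the caller can correlate logs.
app.use((req, res, next) => {
  req.transactionId = req.headers['x-transaction-id'] || crypto.randomUUID();
  res.setHeader('X-Transaction-Id', req.transactionId);
  next();
});

// Forward the same ID on outgoing calls, so an error can be traced across services.
app.get('/orders/:id', async (req, res) => {
  const response = await fetch('http://payments.internal/charges', {
    headers: { 'X-Transaction-Id': req.transactionId }
  });
  res.status(response.status).json(await response.json());
});

// Simple health check endpoint that a monitoring system can poll.
app.get('/__health', (req, res) => {
  res.json({ ok: true, uptime: process.uptime() });
});

app.listen(3000);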

2. Use the right tools for the job.

Since the platform Sarah's team was working on was an internal PaaS, they realized they needed extra tooling to get the job done. They used different solutions for service monitoring, log aggregation, graphing, real-time error analysis, and also built some custom in-house tools for themselves. You can check out the individual tools in Sarah's presentation, from slide 51.

The main takeaway from their example was that they needed tools that could show whether something happened 10 minutes ago but disappeared soon after - while everyone was in a meeting. They also figured out the proper communication channel for alerting: it was not email, but Slack! The team established a clever reaction system in Slack to tag issues as solved or in progress.

3. Cultivate your alerts.

As soon as you stop paying attention to alerts, things will go wrong. When Sarah's team gets an alert, they review it and act on it immediately. If the alert isn't good, they either get rid of it or make it better. If it isn't helpful, they make sure it won't get sent again. It's also important to make sure that alerts haven't stopped working. To check this, the FT team often breaks things deliberately (they actually have a chaos monkey), just to make sure that alerts do fire.

How did the team benefit from these actions? They were able to turn off all emails from system monitoring and carry on with their work while still being able to monitor their systems. Sarah ended her presentation with a strong recommendation for using microservices and with her advice distilled into a brief form:

“I build microservices because they are good, and I really like working with them. If you do that, you have to appreciate that you need to work at supporting them. Think about monitoring from the start, make sure you have the right tools and continue to work on your alerts as you go.”

Death Star diagrams make no sense with Microservices Architectures

Adrian Cockcroft gained a tremendous amount of microservices-related experience during his seven years as Chief Architect at Netflix - a company that relies heavily on a microservices architecture to provide an excellent user experience.

According to Adrian, teams working with microservices have to deal with three major problems right now.

“When you have microservices, you end up with a high rate of change. You do a code push and floods of new microservices appear. It’s possible to launch thousands of them in a short time, which will certainly break any monitoring solution.”

The second problem is that everything is ephemeral: Short lifetimes make it hard to aggregate historical views of services, and hand tweaked monitoring tools take too much work to keep running.

“Microservices have increasingly complex calling patterns. These patterns are hard to figure out with 800 microservices calling each other all the time. The visualization of these flows gets overwhelming, and it’s hard to render so many nodes.”

These microservice diagrams may look complicated, but looking inside a monolith would be even more confusing because it’s tangled together in ways you can’t even see. The system gets tangled together, like a big mass of spaghetti - said Adrian.

A Microservices Architecture often looks like Death Star diagrams

Furthermore, managing scale is a grave challenge in the industry right now, because a single company can have tens of thousands of instances across five continents, and that makes things complicated. Tooling is crucial in this area. Netflix built its own in-house monitoring tool. Twitter made its own tool too, called Zipkin (an open source Java monitoring tool based on Google's Dapper technology). The problem with these tools is that when teams look at the systems they have successfully mapped out, they often end up with so-called Death Star diagrams.

“Currently, there are a bunch of tools trying to do monitoring in a small way - they can show the request flow across a few services. The problem is, that they can only visualize your own bounded context - who are your clients, who are your dependencies. That works pretty well, but once you’re getting into what’s the big picture with everything, the result will be too difficult to comprehend.”

For Adrian, it was a great frustration at Netflix that every monitoring tool they tried exploded on impact. Another problem is that using, or even testing, monitoring tools at scale gets expensive very quickly. Adrian illustrated this with a frightening example: the single biggest budget component at Amazon is its monitoring system - it takes up 20% of the costs.

“Pretty much all of the tools you can buy now understand datacenters with a hundred nodes, that’s easy. Some of them can understand cloud. Some of them can get to a few thousand nodes. There are a few alpha and beta monitoring solutions that claim they can get to the tens of thousands. With APMs you want to understand containers, because your containers might be coming and going in seconds - so event-driven monitoring is a big challenge for these systems.”

According to Adrian, there is still hope since the tools that are currently being built will get to the point where the large scale companies can use them as commercial products.


If you have additional thoughts on the topic, feel free to share them in the comments section.

Killing the Monolith

When building something new - a minimum viable product, for example - starting with microservices is hard and time-consuming: you don't know yet what the product will be, so you can't define the services themselves. Because of this, companies should start out with a majestic monolithic architecture - but as the team and the user base grow, you may need to rethink that approach.

The monolithic architecture

As DHH also points out, the monolith can work pretty well for small companies. As your team grows, however, you are going to step on each other's toes more and more often and have fun with never-ending merge conflicts.

To solve these problems you have to make changes - changes affecting not just the structure of your application but the organization as well: introducing microservices.

Of course, stopping product development for months or even years to make this change is unacceptable; you have to do it in baby steps. This is where evolutionary design comes into the picture.

Evolutionary design

Evolutionary design is a software development practice of creating and modifying the design of a system as it is developed, rather than purporting to specify the system completely before development starts.

Translating this definition to monoliths and microservices: you start with a monolithic architecture, then as the complexity and team grow you introduce microservices. But how?

Let’s take the following example of a monolithic system:

monolithic architecture example

In this example application, we have a key-value store for volatile data used for caching, and a document store for information we want to keep in the longer run. This application also communicates with external APIs, like payment providers or Facebook.

Let’s see how to add new features as services!

Adding features / services to APIs

The simplest possible scenario here is that you are building an API. In this case, your API appears as a single application to the outside world - and when introducing microservices, you don't want to change that.

As a solution, you can add a proxy in front of the legacy API server. In the beginning, all requests go to the legacy application; as new logic is added or old logic is moved into services, only the routing table in the proxy has to be modified.

proxy in a monolithic architecture

The proxy in this example can be anything from nginx to node-http-proxy - both support extensions, so you can move logic like authentication there as well.
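
To give an idea of how small such a routing table can be, here is a minimal sketch using node-http-proxy. The service names, ports and URL prefixes are hypothetical; this is just the shape of the solution, not a production setup.

// Everything goes to the legacy app by default; extracted services are added
// to the routing table one by one. Hostnames and ports are made up.
var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

// As logic is moved out of the monolith, only this table has to change.
var routes = [
  { prefix: '/api/payments', target: 'http://payments-service:3001' },  // new microservice
  { prefix: '/',             target: 'http://legacy-app:3000' }         // everything else
];

http.createServer(function (req, res) {
  // Pick the first route whose prefix matches the request URL.
  var route = routes.find(function (r) {
    return req.url.indexOf(r.prefix) === 0;
  });
  proxy.web(req, res, { target: route.target }, function (err) {
    res.writeHead(502);
    res.end('Bad gateway');
  });
}).listen(8000);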

Adding features / services to web applications

In this scenario, the main difference is that you have a legacy application with a user interface. Adding features here can be a little bit trickier if you want them to serve the UI part as well.

You have two approaches here - both can work quite well:

  • adding new features as SPAs in signed iframes (see the sketch below)
  • adding new features as APIs and frontend components

adding new services to a monolithic architecture

Note: you will have to touch the legacy application at least a little to add new services.
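
For the signed iframe approach, the essence is that the legacy application signs the iframe URL, so the new service can verify that the embed request really comes from it. Here is a rough sketch of how this could look in Node.js - the shared secret, URL layout and expiry window are illustrative assumptions, not a prescription.

var crypto = require('crypto');

// Secret shared between the legacy application and the new service (assumption).
var SHARED_SECRET = process.env.IFRAME_SECRET;

// Legacy application side: build the URL to embed in the iframe.
function buildIframeUrl(userId) {
  var expires = Date.now() + 5 * 60 * 1000;  // valid for 5 minutes
  var payload = userId + '|' + expires;
  var signature = crypto.createHmac('sha256', SHARED_SECRET).update(payload).digest('hex');
  return 'https://new-feature.internal/embed' +
    '?user=' + encodeURIComponent(userId) +
    '&expires=' + expires +
    '&sig=' + signature;
}

// New service side: verify the signature and expiry before rendering the SPA.
function isValidEmbedRequest(query) {
  var payload = query.user + '|' + query.expires;
  var expected = crypto.createHmac('sha256', SHARED_SECRET).update(payload).digest('hex');
  return Number(query.expires) > Date.now() &&
    String(query.sig).length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(String(query.sig)));
}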

Security perspectives

When you are adding new services to a legacy system, one of the key aspects should be security. How are these services going to communicate with the old one? How are services going to communicate with each other? Just a few questions to answer before jumping into the unknown.

Again, you have options:

  • do the authentication on the proxy level
  • authenticate using the legacy application

What we usually do in these cases is go with request signing - it works well with both options. In the first case, the proxy can validate the signature, while in the second case, the legacy application has to sign the requests.

Of course, you can use the same request signing when new services communicate with each other. If your services are built using Node.js, you can use the node-http-signature by Joyent. In practice, it will look something like this on the server:

var fs = require('fs');
var https = require('https');
var httpSignature = require('http-signature');

// TLS configuration for the HTTPS server
var options = {
  key: fs.readFileSync('./key.pem'),
  cert: fs.readFileSync('./cert.pem')
};

https.createServer(options, function (req, res) {
  var rc = 200;
  // Parse the signature parameters from the request's Authorization header
  var parsed = httpSignature.parseRequest(req);
  // Load the public key the client referenced in the keyId field
  var pub = fs.readFileSync(parsed.keyId, 'ascii');
  // Reject the request if the signature does not verify
  if (!httpSignature.verifySignature(parsed, pub))
    rc = 401;

  res.writeHead(rc);
  res.end();
}).listen(8443);

To call this endpoint, you have to do something like this:

var fs = require('fs');
var https = require('https');
var httpSignature = require('http-signature');

// Private key used to sign the outgoing request
var key = fs.readFileSync('./key.pem', 'ascii');

var options = {
  host: 'localhost',
  port: 8443,
  path: '/',
  method: 'GET',
  headers: {}
};

// Adds a 'Date' header, signs the request, and adds the
// 'Authorization' header carrying the signature.
var req = https.request(options, function (res) {
  console.log(res.statusCode);
});

httpSignature.sign(req, {
  key: key,
  keyId: './cert.pem'
});

req.end();

But why the hassle with all the request signing? Why not just use a token for communication? My reasons:

  • exposing the secret (the token) between services is not a good practice - in that case, TLS is a single point of failure
  • you have no way to tell where the request originates from - anyone with the token can send valid requests

With request signing, services share a secret. With that secret, you sign your requests, while the secret itself is never sent over the wire. For more on the topic, read our Node.js Security and Web Authentication Methods Explained articles.

Changes in the organization

When building monolithic architectures, the organization is usually built around functional teams. Managers work with other managers, engineers work with engineers. The main problem with this approach is that it introduces communication problems: units spend a lot of time in meetings instead of doing actual work, and there are a lot of dependencies between these units that have to be resolved.

On the other hand, microservices go hand in hand with cross-functional teams: teams made up of individuals with different roles, like database engineers, testers, infrastructure engineers, and designers. These cross-functional teams are built around business needs, so they can make decisions much faster.

For more on the topic, please refer to the Benefits of Cross-Functional Teams When Building Microservices article.

Summary

Killing the monolith and introducing microservices takes time and requires a relatively big effort, not just from the engineers but from the company's managers as well. You can think of this transition as an investment in the future growth of the company: once it is done, your engineering team will move a lot faster, shipping features sooner with less effort.

If you want to read more on the topic, feel free to subscribe to Microservice Weekly: a free, weekly newsletter with the best news and articles on microservices, hand-curated each week.


How Enterprises Benefit From Microservices Architectures

Building a microservices architecture in an enterprise environment has tremendous benefits:

  • Microservices do not require teams to rewrite the whole application if they want to add new features.
  • Smaller codebases make maintenance easier and faster. This saves a lot of development effort and time, and therefore increases overall productivity.
  • The parts of an application can be scaled separately and are easier to deploy.

After reading this article you will gain valuable insights on the best practices, benefits, and pain-points of using microservices, based on the experiences of highly innovative enterprises like Walmart, Spotify and Amazon.


Walmart Successfully Revitalized its Failing Architecture with Microservices

What can an enterprise do when its aging architecture finally begins to negatively affect business?

This is the multi-million dollar question that the IT department of Walmart Canada had to address after failing its users on Black Friday two years in a row - according to Kevin Webber, who helped re-architect the retail giant's online business.

“It couldn’t handle 6 million pageviews per minute and made it impossible to keep any kind of positive user experience anymore.”

Before embracing microservices, Walmart had an architecture built for the internet of 2005, designed around desktops, laptops and monoliths. The company decided to replatform its old legacy system in 2012, since it was unable to scale to 6 million pageviews per minute and was down for most of the day during peak events. They wanted to prepare for the world of 2020, with 4 billion people connected, 25+ million apps available, and 5,200 GB of data for each person on Earth.

Walmart replatformed to a microservices architecture with the intention of achieving close to 100% availability with reasonable costs.

Walmart prepares with a microservices architecture ready for the World by 2020

“It’s important to have a system elastic enough to scale out to handle peak without negatively impacting experience.”

Migrating to microservices caused a significant business uplift for the company:

  • conversions were up by 20% literally overnight
  • mobile orders were up by 98% instantly
  • no downtime on Black Friday or Boxing Day (the Canadian equivalent of Black Friday) - in fact, zero downtime since the replatforming

The operational savings were significant as well since the company moved off of its expensive hardware onto commodity hardware (cheap virtual x86 servers). They saved 40% of the computing power and experienced 20-50% cost savings overall.

“Building microservice architectures are really the key to staying in front of the demands of the market. It’s not just a sort of replatforming for the sake of technology. It’s about the overall market in general, about what users expect and what business expects to stay competitive.“

Spotify Builds Flawless User Experience with Microservices

Kevin Goldsmith, VP of Engineering at Spotify, knows from experience that an enterprise which intends to move fast and stay innovative in a highly competitive market requires an architecture that can scale.

Spotify serves 75 million active users per month, with an average session length of 23 minutes, while running incredibly complex business rules behind the scenes. They also have to watch out for their competitors, Apple and Google.

“If you’re worried about scaling to hundreds of millions of users, you build your system in a way that you scale components independently.”

Spotify is built on a microservice architecture with autonomous full-stack teams in charge in order to avoid synchronization hell within the organization.

“The problem is, if you want to build a new feature in this kind of (monolithic) world, then the client team have to ask the core team: please get us an API and let us do this. The core team asks the server team: please implement this on the server side so we can do whatever we need to do. And after that, the server team has to ask the infrastructure team for a new database. It is a lot of asking.”

Spotify has 90 teams, 600 developers, and 5 development offices on 2 continents building the same product, so they needed to minimize these dependencies as much as possible.

Spotify builds microservices architectures with full-stack DevOps teams

That's why they build microservices with full-stack teams, each consisting of back-end developers, front-end developers, testers, a UI designer, and a product owner. These teams are autonomous, and their missions do not overlap with those of other teams.

“Developers deploy their services themselves and they are responsible for their own operations too. It’s great when teams have operational responsibility. If they write crummy code, and they are the ones who have to wake up every night to deal with incidents, the code will be fixed very soon.”

Spotify’s microservices are built in very loosely coupled architectures. There aren’t any strict dependencies between individual components.

Kevin mentioned the main challenges of working with microservices:

  • They are difficult to monitor since thousands of instances are running at the same time.
  • Microservices are prone to create increased latency: instead of calling a single process, Spotify is calling a lot of services, and these services are calling other services too, so the latency grows through each of these calls.

Spotify moved to a microservices architecture to move fast and stay innovative

However, building a microservice architecture has its clear benefits for enterprises according to him:

  • It’s easy to scale based on real-world bottlenecks: you can identify the bottlenecks in your services and replicate or fix them there without massive rewrites.
  • It's way easier to test: the test surface is smaller, and the services don't do as much as big monolithic applications, so developers can test them locally - without having to deploy them to a test environment.
  • It’s easier to deploy: applications are smaller, so they deploy really fast.
  • Easier monitoring (in some sense): services are doing less so it’s easier to monitor each of these instances.
  • Services can be versioned independently: there’s no need to add support for multiple versions in the same instances, so they don’t end up adding multiple versions to the same binary.
  • Microservices are less susceptible to large failures: big services fail big, small services fail small.

Building a microservices architecture allows Spotify to have a large number of services down at the same time without the users even noticing it. They've built their system assuming that services can fail at any time, and since individual services don't do too much on their own, a failing one can't ruin the experience of using Spotify.

Kevin Goldsmith, VP of Engineering at Spotify ended his speech with a big shoutout to those who are hesitating about embracing microservices in an enterprise environment:

“We’ve been doing microservices at Spotify for years. We do it on a pretty large scale. We do it with thousands and thousands of running instances. We have been incredibly happy with it because we have scaled stuff up. We can rewrite our services at will - which we do, rather than continue to refactor them or add more and more technical debt over time. We just rewrite them when we get to a scaling inflection point. We do this kind of stuff all the time because it’s really easy with this kind of architecture, and it’s working incredibly well for us. So if you are trying to convince somebody at your company, point to Spotify, point to Netflix, point to other companies and say: This is really working for them, they’re super happy with it.”


Amazon Embraced the DevOps Philosophy with Microservices and Two-Pizza Teams

Rob Brigham, senior AWS product manager, shared the story of how Amazon embraced the DevOps philosophy while migrating to a microservice infrastructure.

He began his speech with a little retrospection: in 2001, the Amazon.com retail website was a large architectural monolith. It was architected in multiple tiers, and those tiers had many components in them, but they were coupled together very tightly, and behaved like one big monolith.

“A lot of startups and enterprise projects start out this way. They take a monolith-first approach, because it’s very quick, but over time, as that project matures and has more developers on it, as it grows and the codebase gets larger, and the architecture gets more complex, that monolith is going to add overhead to your process, and the software development lifecycle is going to slow down.”

How did this affect Amazon? They had a large number of developers working on one big monolithic website, and even though each one of these developers only worked on a very small piece of that application, they still needed to deal with the overhead of coordinating their changes with everyone else who was also working on the same project.

Amazon embraced microservices architecture to shorten the development lifecycle

When they were adding a new feature or making a bugfix, they needed to make sure that the change was not going to break something else in the project. If they wanted to update a shared library to take advantage of a new feature, they needed to convince everyone else on that project to upgrade to the new shared library at the same time. If they wanted to make a quick fix to push out to their customers quickly, they couldn't just do it on their own schedule; they had to coordinate it with all the other developers who were also pushing changes at the same time.

“This led to the existence of something like a merge Friday or a merge week - where all the developers took their changes, merged them together into one version, resolved all the conflicts, and finally created a master version that was ready to move out into production.“

Even when they had that large new version, it still added a lot of overhead to the delivery pipeline. The whole new codebase needed to be rebuilt, all of the test cases needed to be rerun, and after that they had to take the whole application and deploy it to the full production fleet.

Fun fact: In the early 2000’s Amazon even had an engineering group whose sole job was to take these new versions of the application and manually push it across Amazon's production environment.

It was frustrating for the software engineers, and most importantly, it was slowing down the software development lifecycle, the ability to innovate, so they made architectural and organizational changes - big ones.

Amazon makes 50 million deployments a year thanks to DevOps and Microservices architecture

These big changes began on an architectural level: Amazon went through its monolithic application and teased it apart into a Service Oriented Architecture.

“We went through the code and pulled out functional units that served a single purpose and wrapped those with a web service interface. We then established a rule, that from now on, they can only talk to each other through their web service APIs.”

This enabled Amazon to create a highly decoupled architecture, where these services could iterate independently from each other without any coordination between those services as long as they adhered to that standard web service interface.

“Back then it didn’t have a name, but now we call it a microservice architecture.”

Amazon also implemented changes in how their organization operated. They broke down their one, central, hierarchical product development team into small, “two-pizza teams”.

“We originally wanted teams so small that we could feed them with just two pizzas. In reality, it’s 6-8 developers per team right now.”

Each of these teams was given full ownership of one or a few microservices. And full ownership means everything at Amazon: they talk to the customers (internal or external), define their own feature roadmap, design their features, implement them, then test, deploy and operate them.

If anything goes wrong anywhere in that full lifecycle, these two-pizza teams are the ones accountable for fixing it. If they choose to skimp on their testing and are unknowingly releasing bad changes into production, the same engineers have to wake up and fix the service in the middle of the night.

This organizational restructuring properly aligned incentives, so engineering teams are now fully motivated to make sure the entire end-to-end lifecycle operates efficiently.

“We didn’t have this term back then, but now we call it a DevOps organization. We took the responsibilities of development, test, and operations, and merged those all into a single engineering team.”

After all these changes were made, Amazon dramatically improved its front-end development lifecycle. Now the product teams can quickly make decisions and crank out new features for their microservices. The company makes 50 million deployments a year, thanks to the microservice architecture and their continuous delivery processes.

“How can others do this? There is not one right answer for every company. A company needs to look at cultural changes, organizational changes, and process changes. Also, there is one common building block that every DevOps transformation needs: That is to have an efficient and reliable continuous delivery pipeline.”


Every technology has a downside. If we consider microservices at an organizational level, the negative trade-off is clearly the increase in the complexity of operations. There is no way a human can map how all of the services are talking to each other, so companies need tools that provide visibility into their microservice infrastructure.

At RisingStack, our enterprise microservice development and consulting experience inspired us to create a monitoring tool called Trace, which allows engineers to successfully tackle the most common challenges during the full lifecycle of microservices: transaction tracking, anomaly detection, service topology and performance monitoring.

Get started for free

Do you have additional insights on the topic? Share them in the comments.

Benefits of Cross-Functional Teams When Building Microservices

If you want your cross-functional teams to be successful, the first thing you need to do is to make sure that your organization can adapt. The software you create reinforces the culture of your company.

Agility is not the goal: it's a method to solve a problem. Since the external environment can change faster than a company itself, the company may need to change its pace as well. But it isn't about sending out an email to all employees announcing that the organization will apply Scrum starting next week: the transformation must happen on all levels. You need to make sure that there aren't any roadblocks within your company that could slow down the flow of information. This applies to everything from feedback loops to knowledge centers that everyone can access, so people don't need to spend time looking for the information they want to use.

Company culture must be prepared to support the transformation and adapt agile practices. Most people try to avoid being part of the ‘company transformation process’ since mass layoffs usually accompany it. Give people time to adapt and the resources to make it easier for them. Also, if you try to transform the middle managers into coaches first, they can support their colleagues well.

Functional vs cross-functional teams

A cross-functional team completely owns a product during its whole lifetime. They don't just create it; they are also responsible for maintaining it. This makes cross-functional teams perfect candidates for building microservices.

In project management, products are the formal definition of the project deliverables that make up or contribute to delivering the objectives of the project.

Separating teams by function creates distance between them. If a separate QA team does the testing and the developers strictly focus on writing code, the developers often don't care much about testing, and the product can end up with a lot of features that don't work properly. A cross-functional team has individuals with different roles, like database engineers, testers, and infrastructure engineers. As we can see from numerous examples (such as Amazon, Netflix, and Gilt), this can result in an excellent product that works as intended and that users love.

Functional (often called “siloed”) departments frequently adopt an “us vs. them” mentality toward other teams. Instead of better productivity, this is more likely to result in hostility. Working with people from different backgrounds also enables you to view a project from different points of view, which helps you understand the real reason behind a conflict and resolve (or even prevent) it.

Project: a piece of code that has to offer some predefined business value, must be handed over to the client, and is then periodically maintained by a team.

Cross-functional teams can ship code faster than functional teams because they can make their own decisions and work independently within an organization. The teams can focus on improving their cycle time and implement continuous deployment in order to solve the challenges they face almost instantly.

Teams can be formed by a manager or by the team itself. In both cases there is an important question that needs to be answered: how should people be grouped together? The two options are by similar function or by similar business.

Similar function

Grouping by similar function means that managers work with other managers, engineers with engineers, and marketers with fellow marketers. Melvin Conway's law states that “Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.” This is as true today as it was half a century ago. These groups are called functional units. They work best if you can manage to hire the best people and build a superb team of specialists who are truly experts in their own fields. Such a community enables them to learn from each other and master their jobs. The biggest challenge is that departments usually have difficulty communicating with each other. For example, if the task of the UI team is to overhaul the interface but the backend team is still in the middle of something else, the whole project will be delayed until the backend tasks are done - since the UI team can't even start.

Watch out for these signals. Constantly ordering work across capabilities, splitting stories between teams, having to move people around towards tasks, deploying in lock-step and fan-in for end-to-end testing all mean that Conway’s law is in effect in your organization.

Similar business

In this case, people work together to deliver the same business value: a new feature, a new project, or even a new product.

The teams need to be stable enough to get the job done, and in exchange, they can move faster and more efficiently than teams grouped by similar functions. Communication is more likely to be oriented around the goal itself rather than around the communication or management issues across functional units, which makes this approach more efficient.

Challenges

Nearly 75% of cross-functional teams have challenges with at least three of the following five criteria, according to Harvard Business Review:

  • meeting a planned budget
  • staying on schedule
  • adhering to specifications
  • meeting customer expectations
  • maintaining alignment with the company’s corporate goals

The Kanban community points out that reorganizing already established teams can cost a lot more without having a system to organize the tasks for the teams. Before you decide to reorganize your whole company, it may be worth taking a look at what already works and what doesn't. If the organization's suboptimal pace originates from a confused state of low-level tasks, a reorganization by itself won't do much.

Building microservices

Microservices should be:

  • cheap to replace;
  • quick to scale;
  • fault tolerant.

Above all: they should allow you to go as fast as possible.

Siloed teams spend weeks on iterations. Because such teams build tightly coupled services, manual tests need to be performed at the same time for all services. This is far from going fast: the tests can often last for weeks.

The first steps towards cross-functional teams

When building microservices, teams can be organized around a single business purpose, and focus on continuous delivery to skip the long-lasting test periods.

Continuous delivery is a software development discipline where you build software in such a way that the software can be released to production at any time.

To achieve this, you need a collaborative working environment for everyone involved in delivery. This environment is the first step toward having cross-functional teams.

What this means in practice: merge architects, testers, operations and development teams into a single cross-functional team (no bigger than 10-20 people). This way, teams don't have to pass a project around until they get the feedback they need, and delivering services doesn't have to happen only once every few weeks.

James Lewis recommends using these best practices on the different levels within your organization:

  • Top-layer, at the line of business (across the whole company)
    • semantic versioning (define a major version of the software that every team can use within the company)
  • Value streams (group of teams within an organization that can deliver business value to the customer)
    • semantic versioning
    • consumer-driven contract testing
  • Inter-team layer (relationship between the individual teams)
    • tolerant reader
    • consumer-driven contract testing
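
To make the "tolerant reader" item above a bit more concrete, here is a tiny sketch: the consumer reads only the fields it actually needs and supplies defaults, so the provider can add or rearrange fields without breaking it. The payload shape and field names are made up for illustration.

// Tolerant reader sketch: pick only what you need, ignore the rest.
function readOrder(payload) {
  return {
    id: payload.id,
    total: payload.total !== undefined ? payload.total : 0,
    // fall back gracefully if an optional field is missing
    currency: payload.currency || 'USD'
  };
}

// Works with today's payload...
console.log(readOrder({ id: 42, total: 99.9, currency: 'EUR' }));
// ...and keeps working when the provider adds fields the consumer never asked for.
console.log(readOrder({ id: 43, total: 10, currency: 'EUR', loyaltyPoints: 120, meta: {} }));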

How to make cross-functional teams efficient

To make cross-functional teams truly effective, they have to be able to operate independently. This way, the unit can complete a project or even a whole feature without requiring regular coordination or micromanagement. Of course, you need to know what's going on, but if the goals are clearly set, you don't need to interfere, and all the tasks get done in time. There can be someone who reports to the VPs or C-level executives, but QA managers and other mid-level managers are no longer a must.

Constant re-evaluation assures that you're making progress. If the market changes faster than a project develops, it may be necessary to kill the project to save precious resources and divert them to another one that could achieve greater results within the same period. It's not an easy thing to do, but it's not worth chasing something into a dead end only to find out that you need to turn back.

The optimal size of a microservice is not necessarily ‘micro’. Amazon uses the size that a ‘two-pizza team’ (around a dozen people) can maintain, but there are setups where half a dozen people support half a dozen services. The concept of self-contained systems suggests using services larger than a microservice but still small enough to keep a team busy and provide meaningful value.

Netflix

Netflix decided to go with highly aligned and loosely coupled teams. The company set clear, specific, and broadly understood goals. The interactions between teams are focused on strategy and objectives, not tactics. Although being transparent requires a large investment of management time, they feel it has been worth it.

Their teams try to keep meetings to a minimum. This is possible because the teams truly trust each other - without requiring layers of approvals. The leaders reach out proactively to help whenever they feel it's appropriate and don't focus on supervising each task of the team members.

Cisco

Cross-functional teams need a good project manager more than anything else. Cisco implemented a 3-layer structure: a group of specialists working on their tasks, a smaller core of people who communicate back to their teams, and two vice presidents at the top. The conclusion was that every project should have an end-to-end leader who oversees the whole operation, and the individual teams should have a leader as well. If the goals are clearly established, this setup helps make sure that the teams won't miss them.

Takeaways

  • The success with microservices isn’t just about using the right cloud service or container system. Organizations that embrace cross-functional teams can scale more quickly than a company with siloed teams trying to move to a microservice-based architecture. The key for that is effective communication: the right information goes to the right place at the right time.
  • Teams building microservices need sophisticated monitoring and logging setups for each service to keep track of both operational and business metrics. Trace allows you to measure both.
  • Conway's law creates a loop: teams not only create software that mirrors the structure of the organization, but that software also reinforces the existing hierarchy.
  • Open source projects are a good example to follow: people work together from different functions towards a mutual goal. These projects also follow Conway’s law and become modular and easy to scale.

Our recently published report aims to address questions about Node.js adoption in enterprise organizations with cross-functional teams.

Read the Report

Top Experts on Microservices

If you’re looking for the brightest microservice experts to learn from, you’ve come to the right place.

Here are the best microservice experts worth following. As you know, there’s no shortage of high-quality talks and blogs about microservices on the web (especially now in 2015). But we decided to collect the absolute best developers you should definitely follow if you're interested in the topic.

Whether you’re a veteran software architect or a zero-to-hero developer, these experts give you the tips, insights and experiences you need to get the most out of your microservices.

Our list of the brightest microservice experts:

martin-fowler-microservices-expert

Martin Fowler

Martin is a British software engineer who works at ThoughtWorks and specializes in object-oriented analysis and design, UML, patterns, and agile software development methodologies, including extreme programming. He wrote half a dozen books on software development, including Refactoring and Patterns of Enterprise Application Architecture.
Twitter: @martinfowler


sam-newman-microservices-expert

Sam Newman

Sam splits his time between consulting for clients at ThoughtWorks and speaking at conferences all over the world. He focuses on the cloud and continuous delivery space, and more recently on the use of microservice architectures. He is the author of a book on the topic called Building Microservices.
Twitter: @samnewman


chad-fowler-microservices-expert

Chad Fowler

Chad writes both software and books: his best-seller is Rails Recipes and he also contributed to Tim Ferriss' The 4-Hour Body. He worked at 6Wunderkinder (acquired by Microsoft), the makers of Wunderlist, the highly popular to-do app.
Twitter: @chadfowler Github: chad


chris-richardson-microservices-expert

Chris Richardson

Chris is a software architect and serial entrepreneur who helps organizations improve their applications (including microservices). He is the founder of Eventuate, a platform for writing event-driven applications.
Twitter: @crichardson


cj-silverio-microservices-expert

C J Silverio

C J works at NPM and had a major role in the complete redesign of the NPM registry. She is a regular speaker at conferences.
Twitter: @ceejbot


adrian-cockroft-microservices-expert

Adrian Cockcroft

Adrian worked at eBay, Sun Microsystems and led the Netflix Open Source program from 2007-2013. He works at Battery Ventures (a VC firm) helping companies with their product development cycles using microservices and continuous delivery.
Twitter: @adrianco


brendan-gregg-microservices-expert

Brendan Gregg

Brendan Gregg is a senior performance architect at Netflix, where he does large scale computer performance design, analysis, and tuning. He is the author of Systems Performance published by Prentice Hall, and received the USENIX LISA Award for Outstanding Achievement in System Administration. He has previously worked as a performance and kernel engineer, and has created performance analysis tools included in multiple operating systems, as well as visualizations and methodologies.
Twitter: @brendangregg


russ-miles-microservices-expert

Russ Miles

Russ has worked in software for two decades. He is now the Chief Scientist at Simplicity Itself and the author of Antifragile Software.
Twitter: @russmiles


james-lewis-microservices-expert

James Lewis

James is a member of the ThoughtWorks Technical Advisory Board and provides advice to technology and business leaders about web integration, evolutionary architecture, emergent design and lean thinking.
Twitter: @boicy


gregor-elke-microservices-expert

Gregor Elke

Gregor works at codecentric AG and wants to bring Node.js and the corporate world together using microservices for the greater good of both worlds. He is interested in Node.js, lightweight software architecture and "streaming" data processing.
Twitter: @greelgorke Github: greelgorke


oliver-gierke-microservices-expert

Oliver Gierke

Oliver is the lead of the Spring Data project at Pivotal and member of the JPA 2.1 expert group. He has been into developing enterprise applications and open source projects for over 8 years now. He is into software architecture, Spring, REST and persistence technologies. Regularly speaks at German and international conferences.
Twitter: @olivergierke Github: olivergierke


alexander-heusingfeld-microservices-expert

Alexander Heusingfeld

Alex is a senior consultant for architecture and software engineering at innoQ Deutschland GmbH. He supports customers with his deep knowledge of Java and JVM based systems. Most often he is concerned with the design, evaluation and implementation of architectures for enterprise application integration. Occasional speaker at IT conferences and Java User Groups.
Twitter: @goldstift Github: aheusingfeld


sudhir-tonse-microservices-expert

Sudhir Tonse

Sudhir Tonse manages the Realtime Data Intelligence team at Uber. Previously he worked in the Cloud Platform Infrastructure team at Netflix and was responsible for many of the services and components that form the Netflix Cloud Platform as a Service. Prior to Netflix, Sudhir was an Architect at Netscape/AOL delivering large-scale consumer and enterprise applications in the area of Personalization, Infrastructure and Advertising Solutions.
Twitter: @stonse


paul-osman-microservices-expert

Paul Osman

Paul is a Platform Engineering Manager and leader of the Platform Engineering Team at PagerDuty. His primary interests are distributed systems, APIs and scalable teams.
Twitter: @paulosman Github: paulosman


steven-ihde-microservices-expert

Steven Ihde

Steven is the Director of Service and Presentation Infrastructure at LinkedIn. He joined LinkedIn in 2010 and was a founding member of LinkedIn's Service Infrastructure team. He works on high performance networking, distributed service discovery, web frameworks, and Rest.li, LinkedIn's framework for building REST applications at scale.
LinkedIn: Steven Ihde


david-syer-microservices-expert

David Syer

David is an experienced, delivery-focused architect and development manager. He has designed and built successful enterprise software solutions using Spring, and implemented them in major financial institutions worldwide. He has deep knowledge and experience with all aspects of real-life usage of the Spring framework.
Twitter: @david_syer


douglas-squirrel-microservices-expert

Douglas Squirrel

In the last 15 years Douglas has been CTO at startups in financial services and e-commerce and is currently VP Technology at children's payment-card firm Osper. He has taught 3rd grade, started a one-man business, and performed in comedy sketches. He also advises startup founders and tech leaders.
Twitter: @douglassquirrel


richard-rodger-microservices-expert

Richard Rodger

Richard is the CTO and co-founder of nearForm, a Node.js specialist company in Europe. He is very enthusiastic about open-source projects: he is the author of Seneca.js, a microservices tool kit for Node.js, and nodezoo.com, a search engine for Node.js modules. He is the author of "Mobile Application Development in the Cloud".
Twitter: @rjrodger Github: rjrodger


daniel-bryant-microservices-expert

Daniel Bryant

Daniel is a Principal Consultant at OpenCredo, a software consultancy and delivery company. Currently he specialises in enabling agility within organisations by introducing better requirement gathering and planning techniques and by introducing a DevOps culture. He is a leader within the London Java Community (LJC), where he acts as a mentor and assists with organising meetups and hackdays.
Twitter: @danielbryantuk Github: daniel-bryant-uk


viktor-klang-microservices-expert

Viktor Klang

Viktor is a passionate programmer who is into concurrency paradigms and performance optimization. He is Chief Software Architect at Typesafe. He's a big fan of agile development, scalable software and elegant code, and spent the past 7 years building an EIS, ERP, CRM and PDM system for a large international enterprise.
Twitter: @viktorklang


udi-dahan-microservices-expert

Udi Dahan

Udi Dahan is an expert on Service-Oriented Architectures and Domain-Driven Design and also the creator of NServiceBus, the most popular service bus for .NET.
Twitter: @UdiDahan


stephane-maldini-microservices-expert

Stephane Maldini

Stephane is Software Architect at Pivotal with experience aligning various OSS technologies. He is interested in cloud computing, data science and messaging. He co-founded the Reactor Project to help developers create reactive, low-latency, fast data architectures on the JVM and beyond.
Twitter: @smaldini


greg-young-microservices-expert

Greg Young

Greg is an independent consultant and serial entrepreneur. He coined the term "CQRS" (Command Query Responsibility Segregation) and it was instantly picked up by the community who have elaborated upon it ever since. He's a frequent contributor to InfoQ, speaker/trainer at Skills Matter and also a well-known speaker at international conferences.
Twitter: @gregyoung



Jakub Korab

Jakub runs his own consultancy called Ameliant, working in the area of open source integration and messaging, where he develops scalable, fault-tolerant and performant system integrations. He is co-author of the “Apache Camel Developer's Cookbook”.
Twitter: @jakekorab Github: jkorab



Bert Ertman

Bert is a Fellow at Luminis in the Netherlands. Besides his day job, he is a Java User Group leader for NLJUG, the Dutch Java User Group (~4000 members). He is a frequent speaker on Java and software architecture related topics, a book author, and a member of the editorial advisory board of the Dutch software development magazine Java Magazine.
Twitter: @bertertman



James Strachan

James created the Groovy programming language and Apache Camel, and was one of the founders of the open source projects Apache ActiveMQ, Apache ServiceMix, fabric8 and hawtio. He is currently a Senior Consulting Software Engineer at Red Hat.
Twitter: @jstrachan



Brendan McAdams

Brendan works at Netflix, having previously worked within the Professional Services team at Typesafe, where he helped Scala, Akka and Play users better understand and deploy the Typesafe Stack. He has made various contributions to open-source projects, including a Linux driver for the Lego Mindstorms system. He also developed and maintained Casbah, the MongoDB driver for Scala, and a connector that integrates Hadoop and MongoDB.
Twitter: @rit



Vivek Juneja

Vivek is an engineer based in Seoul who is focused on cloud services and microservices. He started working with cloud platforms in 2008, and was an early adopter of AWS and Eucalyptus. He’s also a technology evangelist and speaks at various technology conferences in India.
Twitter: @vivekjuneja



Stefan Borsje

Stefan is the co-founder and CTO of Karma, whose product is a mobile WiFi device without monthly fees and contracts. Karma uses microservices in production for its backend API.
Twitter: @sborsje



Tom Watson

Tom is the co-founder and CTO of Hubble, an office space marketplace by entrepreneurs for entrepreneurs. He previously founded Kick Campus to connect talented university students to jobs in startups. At Hubble, the team recently switched their architecture from a Django monolith to microservices.
Twitter: @watsontom100


Let's finish the list with Melvin Conway's famous quote:

"Organizations which design systems (...) are constrained to produce designs which are copies of the communication structures of these organizations."

What does it mean? It means that microservices are not just a pattern for your infrastructure: if you want to be successful with them, you have to adapt your organization first.

Further reading

Do you miss anyone from the list? Please put their name in the comments - we'd like to keep this list up-to-date!

Trace - Microservice Monitoring and Debugging

Trace by RisingStack - Distributed Tracing, Service map, Alerting and Performance Monitoring for Microservices

We are happy to announce Trace, a microservice monitoring and debugging tool that empowers you to get all the metrics you need when operating microservices. Trace comes both as a free, open-source tool and as a hosted service.

Start monitoring your services

Why Trace for microservice monitoring?

Debugging and monitoring microservices can be really challenging:

  • there is no single stack trace, which makes debugging hard,
  • it is easy to lose track of services when dealing with a lot of them,
  • detecting bottlenecks is difficult.

Key Features

Trace solves these problems by adding the ability to

  • collect distributed stack traces,
  • view the topology of your services,
  • get alerts for overwhelmed services,
  • monitor third-party services (coming soon),
  • trace heterogeneous infrastructures with languages like Java, PHP or Ruby (coming soon).

How It Works

We want to monitor the traffic of our microservices. To be able to do this, we have to access each HTTP request-response pair to get and set information. By wrapping the http core module's request function and the Server.prototype object, we can sniff all the information we need.

Trace is mostly based on the Google Dapper white paper, so we implemented the ServerReceive, ServerSend, ClientSend and ClientReceive events for monitoring the lifetime of a request.

trace events

In the example above, we want to catch the very first incoming request: SR (A): Server Receive. The http.Server will emit a request event with an http.IncomingMessage and an http.ServerResponse, so the handler has the signature of

function (request, response) { }  

In the wrapper, we can record any information we want, like the timing, the source, the requested path, or even all the HTTP headers for further investigation.
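
To make this more tangible, here is a heavily simplified sketch of the idea - not the actual Trace collector - that wraps a request handler to capture the Server Receive and Server Send timestamps along with the path and headers:

// Simplified illustration only: wrap a request handler to record SR/SS data.
// The real collector instruments the http core module itself.
var http = require('http');

function instrument(handler) {
  return function (request, response) {
    var record = {
      serverReceive: Date.now(),  // SR: the request arrived at our service
      method: request.method,
      url: request.url,
      headers: request.headers    // kept for further investigation
    };

    response.on('finish', function () {
      record.serverSend = Date.now();  // SS: the response left our service
      console.log(record);             // a real collector would report this instead
    });

    handler(request, response);
  };
}

http.createServer(instrument(function (request, response) {
  response.end('Hello!');
})).listen(3000);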

One of Trace's fundamental features is tracking whole transactions across a microservices architecture. Luckily, we can do this by setting a request-id header on the outgoing requests.

If our service has to call another service before it can send the response to its caller, we have to track these request-response pairs, called spans, as well. A span is always created by calling an endpoint through http.request. By wrapping the http.request function, we can do the same as with http.Server.prototype, with one minor difference: here we want to pair each request with its corresponding response and assign a span-id to it.

However, the request-id simply passes through the span. To store the generated request-id, we use Continuation-Local Storage (CLS): after a request arrives and the request-id is generated, we store it in CLS, so when we call another service we can simply read it back.
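
The client side can be sketched in a similar way. The snippet below is only an illustration of the mechanism, assuming the continuation-local-storage and uuid npm packages - the real collector handles many more edge cases, such as binding event emitters to the CLS namespace:

var http = require('http');
var uuid = require('uuid');
var cls = require('continuation-local-storage');

var session = cls.createNamespace('trace-illustration');

// Wrap outgoing requests (CS): attach the stored request-id as a header.
var originalRequest = http.request;
http.request = function (options) {
  if (options && typeof options === 'object') {
    options.headers = options.headers || {};
    var requestId = session.get('request-id');
    if (requestId) {
      options.headers['request-id'] = requestId;
    }
  }
  return originalRequest.apply(http, arguments);
};

// Incoming requests (SR): reuse the caller's request-id or generate a new one,
// and keep it in CLS so that nested http.request calls can pick it up.
http.createServer(function (request, response) {
  session.run(function () {
    session.set('request-id', request.headers['request-id'] || uuid.v4());
    // ... call downstream services here, the header is attached automatically ...
    response.end('ok');
  });
}).listen(3000);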

Create reporters

After you set up the collector by simply requiring it in your main file:

require('@risingstack/trace');

You can select a reporting method to process the collected data. You can use:

  • our Trace servers to see the transactions, your topology and services,
  • Logstash,
  • or any other custom reporter (see later).

You have to provide a trace.config.js config file, where you can declare the reporter. If you just want to see the collected data, you can use Logstash with the following config file:

/**
* Trace example config file for using with Logstash
*/

var reporters = require('@risingstack/trace/lib/reporters');
var config = {};

config.appName = 'Example Service Name';

config.reporter = reporters.logstash.create({  
  type: 'tcp',
  host: 'localhost',
  port: 12201
});

module.exports = config;  

If you start Logstash with the following command, all the collected information will be displayed in the terminal:

logstash -e 'input { tcp { port => 12201 } } output { stdout {} }'  

Also, this approach can be really powerful when you want to tunnel these metrics into different systems like Elasticsearch, or just store them on S3.

Adding custom reporters

If you want to use the collector with your custom reporter, you have to provide your own implementation of the reporter API. The only required method is a send method with the collected data and a callback as parameters.

function CustomReporter (options) {  
  // init your reporter
}

CustomReporter.prototype.send = function (data, callback) {  
  // implement the data sending,
  // don't forget to call the callback after the data sending has ended
};

function create(options) {  
  return new CustomReporter(options);
}

module.exports.create = create;  
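
Assuming the reporter above lives in a local my-custom-reporter.js file (the file name is just an example), wiring it into the config could look like this:

/**
* Trace example config file for using a custom reporter
*/

var myReporter = require('./my-custom-reporter');
var config = {};

config.appName = 'Example Service Name';

config.reporter = myReporter.create({
  // pass any options your reporter needs here
});

module.exports = config;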

Use the Trace collector with Trace servers

If you want to enjoy all the benefits of our Trace service, you need to create an account first. After your API Key has been generated, you can use it in your config file:

/**
* Trace example config file for using with Trace servers
*/

var config = {};

config.appName = 'Example Service Name';

config.reporter = require('@risingstack/trace/lib/reporters').trace.create({
  apiKey: 'YOUR-APIKEY',
  appName: config.appName
});

module.exports = config;  

Adding Trace to your project

To use the Trace collector as a dependency of your project, use:

npm install --save @risingstack/trace

Currently, Trace supports specific Node.js versions only - see the package's README for the exact list of supported releases.

Trace-as-a-Service

If you don't want to run your own infrastructure for storing and displaying microservice metrics, we provide microservice monitoring as a service as well. This is Trace:

trace topology

trace view

Check out our tool!

Start monitoring your services

Why You Should Start Using Microservices

This post aims to give you a better understanding of microservices: what they are, what the benefits and challenges of using this pattern are, and how you can start building them using Node.js.

Before diving into the world of microservices let us take a look at monoliths to better understand the motivation behind microservices.

The Monolithic Way

Monoliths are built as a single unit, so they are responsible for every possible functionality: handling HTTP requests, executing domain logic, performing database operations, communicating with the browser/client, handling authentication and so on.

Because of this, even the smallest change in the system involves building and deploying the whole application.

Building and deploying is not the only problem - just think about scaling. You have to run multiple instances of the monolith, even if you know that the bottleneck lies in one component only.

Take the following simplified example:

monolithic application

What happens when your users suddenly start uploading lots of images? Your whole application will suffer performance issues. You have two options here: either scale the application by running multiple instances of the monolith, or move the logic into a microservice.

The Microservices Way

“An approach to developing a single application as a suite of small services.” - Martin Fowler

The microservice pattern is not new. The term microservice was discussed at a workshop of software architects near Venice in May of 2011 to describe what the participants saw as a common architectural style that many of them had been recently exploring.

The previous monolith could be transformed using the microservices pattern into the following:

microservices

Advantages of Microservices

Evolutionary Design

One of the biggest advantages of the microservices pattern is that it does not require you to rewrite your whole application from the ground up. Instead, you can add new features as microservices and plug them into your existing application.

Small Codebase

Each microservice deals with one concern only - this results in a small codebase, which means easier maintainability.

Easy to Scale

Back to the previous example: what happens when your users suddenly start uploading lots of images?

In this case, you have the freedom to scale only the Image API, as that service will handle the bigger load. Easy, right?

Easy to Deploy

Most microservices have only a couple of dependencies, so they should be easy to deploy.

System Resilience

As your application is composed of multiple microservices, if some of them go down, only the corresponding features of your application will go down, not the entire application.

New Challenges

The microservice pattern is not the silver bullet for designing systems - it helps a lot, but also comes with new challenges.

Communication Between Microservices

One of the challenges is communication - our microservices will rely on each other and they have to communicate. Let's take a look at the most common options!

Using HTTP APIs

Microservices can expose HTTP endpoints, so other services can consume their functionality.

But why HTTP? HTTP is the de facto standard way of exchanging information - every language has some kind of HTTP client (yes, you can write your microservices in different languages). We also have the toolset to scale it, so there is no need to reinvent the wheel. Have I mentioned that it is stateless as well?
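
As an illustration, an Image API could expose a single HTTP endpoint with nothing more than the http core module - the endpoint, port and response format below are made up for the example:

// A deliberately tiny Image API: one HTTP endpoint that accepts uploads.
var http = require('http');

http.createServer(function (request, response) {
  if (request.method === 'POST' && request.url === '/images') {
    var size = 0;
    request.on('data', function (chunk) {
      size += chunk.length;  // a real service would resize and store the image
    });
    request.on('end', function () {
      response.writeHead(201, { 'Content-Type': 'application/json' });
      response.end(JSON.stringify({ received: size }));
    });
  } else {
    response.writeHead(404);
    response.end();
  }
}).listen(3001, function () {
  console.log('Image API is listening on port 3001');
});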

Using Messaging Queues

Another way for microservices to communicate with each other is to use messaging queues like RabbitMQ or ZeroMQ. This way of communication is extremely useful for long-running worker tasks or mass processing. A good example is the Email API: when an email has to be sent out, it is put into a queue, and the Email API processes and sends it.
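
A minimal sketch of such an Email API worker, using the amqplib client against a local RabbitMQ instance (the queue name and message format are assumptions for the example):

// Email API worker: consumes email jobs from a RabbitMQ queue.
var amqp = require('amqplib');

amqp.connect('amqp://localhost')
  .then(function (connection) {
    return connection.createChannel();
  })
  .then(function (channel) {
    return channel.assertQueue('emails').then(function () {
      return channel.consume('emails', function (message) {
        var job = JSON.parse(message.content.toString());
        console.log('Sending email to', job.to);  // the actual sending would happen here
        channel.ack(message);                     // acknowledge only after a successful send
      });
    });
  })
  .catch(console.error);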

Service Discovery

Speaking of communication, our microservices need to know how they can find each other so they can talk. For this, we need a system that is consistent, highly available and distributed. Take the Image API as an example: the main application has to know where to find the required service, so it has to acquire its address.
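
One possible approach is to keep the addresses in a distributed key-value store such as etcd and look them up before calling a service - the sketch below shows the idea. The key, the address and the etcd location are assumptions for the example; a production setup would also handle TTLs, caching and watching for changes:

var http = require('http');
var querystring = require('querystring');

// Register the Image API's address in etcd (v2 HTTP keys API).
function register(address, callback) {
  var body = querystring.stringify({ value: address });
  var request = http.request({
    host: 'localhost',
    port: 2379,
    method: 'PUT',
    path: '/v2/keys/services/image-api',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
  }, function (response) {
    response.resume();
    response.on('end', callback);
  });
  request.end(body);
}

// Look up the Image API's address before calling it.
function discover(callback) {
  http.get('http://localhost:2379/v2/keys/services/image-api', function (response) {
    var raw = '';
    response.on('data', function (chunk) { raw += chunk; });
    response.on('end', function () {
      callback(null, JSON.parse(raw).node.value);
    });
  });
}

register('http://10.0.1.5:3001', function () {
  discover(function (err, address) {
    console.log('The Image API lives at', address);
  });
});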

Useful libraries/tools/frameworks

Here you can find a list of projects that we frequently use at RisingStack to build microservice-based infrastructures. In the upcoming blog posts, you will get a better picture of how you can fit them into your stack.

For HTTP APIs:

For messaging:

For service discovery:

Next up

This was the first post in a series dealing with microservices. In the next one, we will discover how you can implement service discovery for Node.js applications.

Are you planning to introduce microservices into your organization? Look no further, we are happy to help! Check out the RisingStack webpage to get a better picture of our services.