by Manjula Liyanage

Monolith to Microservices: Evolutionary Patterns to Transform Your Monolith

ISBN-13: 978-1492047841

ISBN-10: 1492047848

Go to the Amazon page for details and reviews.

How do you detangle a monolithic system and migrate it to a microservice architecture? How do you do it while maintaining business-as-usual? 

My Notes

Independent deployability is the idea that we can make a change to a microservice and deploy it into a production environment without having to change or deploy any other services.

In other words, we need to be able to change one service without having to change anything else. This means we need explicit, well-defined, and stable contracts between services. Some implementation choices make this difficult — the sharing of databases, for example, is especially problematic.

If you need to make a change to two services to roll out a feature, and orchestrate the deployment of these two changes, that takes more work than making the same change inside a single service (or, for that matter, a monolith).

One of the things I see people having the hardest time with is the idea that microservices should not share databases.

Don’t share databases, unless you really have to. And even then do everything you can to avoid it. In my opinion, it’s one of the worst things you can do if you’re trying to achieve independent deployability.
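To make that concrete, here is a minimal sketch (my own illustration, not code from the book) of the alternative to sharing a database: a hypothetical Invoicing component depends only on an explicit contract exposed by the Customer service, so the Customer team can change its internal schema without forcing a coordinated deployment.

```python
# Hypothetical sketch: Invoicing depends on an explicit contract exposed by the
# Customer service instead of reaching into the Customer service's database.

class CustomerService:
    """Owns customer data; the internal schema is free to change."""

    def __init__(self):
        # Internal storage detail, hidden behind the service's interface.
        self._rows = {42: {"legal_name": "ACME Ltd", "credit_hold": False}}

    # The explicit, stable contract other services are allowed to depend on.
    def get_billing_status(self, customer_id: int) -> dict:
        row = self._rows[customer_id]
        return {"customer_id": customer_id, "can_invoice": not row["credit_hold"]}


class InvoicingService:
    """Consumes the contract, not the tables behind it."""

    def __init__(self, customers: CustomerService):
        self._customers = customers

    def raise_invoice(self, customer_id: int, amount: float) -> str:
        status = self._customers.get_billing_status(customer_id)
        if not status["can_invoice"]:
            return "rejected: customer on credit hold"
        return f"invoice raised for customer {customer_id}: {amount:.2f}"


if __name__ == "__main__":
    print(InvoicingService(CustomerService()).raise_invoice(42, 99.50))
```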

As microservice expert James Lewis put it, “Microservices buy you options.”

If we want an architecture that makes it easier for us to more rapidly deploy new features, then leaving the UI as a monolithic blob can be a big mistake.

I am a strong advocate for incremental migration to a microservice architecture.

A multitude of studies show the challenges of confused lines of ownership.

I refer to this problem as delivery contention.

Reducing delivery contention:

A microservice architecture does give you more concrete boundaries in a system around which ownership lines can be drawn, giving you much more flexibility in how you reduce this problem.

The single-process monolith, though, has a whole host of advantages, too. Its much simpler deployment topology can avoid many of the pitfalls associated with distributed systems.

It can result in much simpler developer workflows; and monitoring, troubleshooting, and activities like end-to-end testing can be greatly simplified as well.

If we want to reuse code within a distributed system, we have to decide whether we want to copy code, break out libraries, or push the shared functionality into a service.

I’ve met multiple people for whom the term monolith is synonymous with legacy.

A monolithic architecture is a choice, and a valid one.

Coupling speaks to how changing one thing requires a change in another; cohesion talks to how we group related code.

A structure is stable if cohesion is high, and coupling is low. Larry Constantine

Cohesion, in short: “the code that changes together, stays together.”

We’re optimizing our microservice architecture around ease of making changes in business functionality.

A classic and common example of implementation coupling comes in the form of sharing a database.

Treat the service interfaces that your microservice exposes like a user interface. Use outside-in thinking to shape the interface design in partnership with the people who will call your service.

Everything must be deployed together, so we have deployment coupling.

Aggregates typically have a life cycle around them, which opens them up to being implemented as a state machine. We want to treat aggregates as self-contained units; we want to ensure that the code that handles the state transitions of an aggregate is grouped together, along with the state itself.

The aggregate is a self-contained state machine that focuses on a single domain concept in our system, with the bounded context representing a collection of associated aggregates, again with an explicit interface to the wider world.
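As a rough illustration of that idea (mine, not the book's), the hypothetical Order aggregate below keeps its state and the rules for changing that state in one place, implemented as a small state machine.

```python
# Hypothetical Order aggregate: state and the rules for changing it live together.

class InvalidTransition(Exception):
    pass


class Order:
    # Allowed state transitions for the aggregate's life cycle.
    _TRANSITIONS = {
        "placed": {"paid", "cancelled"},
        "paid": {"shipped", "refunded"},
        "shipped": set(),      # terminal
        "cancelled": set(),    # terminal
        "refunded": set(),     # terminal
    }

    def __init__(self, order_id: str):
        self.order_id = order_id
        self.state = "placed"

    def _transition(self, new_state: str) -> None:
        if new_state not in self._TRANSITIONS[self.state]:
            raise InvalidTransition(f"{self.state} -> {new_state}")
        self.state = new_state

    def pay(self):    self._transition("paid")
    def ship(self):   self._transition("shipped")
    def cancel(self): self._transition("cancelled")


if __name__ == "__main__":
    order = Order("o-123")
    order.pay()
    order.ship()
    print(order.order_id, order.state)   # o-123 shipped
```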

Adopt a microservice architecture in order to achieve something that you can’t currently achieve with your existing system architecture.

When teams own microservices, and have full control over those services, they increase the amount of autonomy they can have within a larger organization.

By being able to make and deploy changes to individual microservices, and deploy these changes without having to wait for coordinated releases, we have the potential to release functionality to our customers more quickly.

By breaking our processing into individual microservices, these processes can be scaled independently. This means we can also, hopefully, scale more cost-effectively, giving extra resources only to the parts of the system that need them.

By breaking our application into individual, independently deployable processes, we open up a host of mechanisms to improve the robustness of our applications.

Robustness is the ability of a system to react to expected variations. Resilience is having an organization capable of adapting to things that haven’t been thought of, which could well include creating a culture of experimentation through things like chaos engineering. For example, we are aware that a specific machine could die, so we might bring redundancy into our system by load balancing across multiple instances. That is an example of addressing robustness. Resilience is the process of an organization preparing itself for the fact that it cannot anticipate all potential problems.

In many ways, having an existing codebase you want to decompose into microservices is much easier than trying to go to microservices from the beginning.

When you migrate to a microservice architecture, you push a lot of complexity into the operational domain. Previous techniques you used to monitor and troubleshoot your monolithic deployment may well not work with your new distributed system.

And finally, we have the biggest reason not to adopt microservices, and that is if you don’t have a clear idea of what exactly it is that you’re trying to achieve.

Doing microservices just because everyone else is doing it is a terrible idea.

Wait too long, and the pain — and causes of that pain — will diminish. Remember, what you’re trying to do is not say, “We should do microservices now!” You’re trying to share a sense of urgency about what you want to achieve — and as I’ve stated, microservices are not the goal!

Being committed to a vision is important, but being overly committed to a specific strategy in the face of contrary evidence is dangerous, and can lead to significant sunk cost fallacy.

was a really effective way of sharing small, actionable advice.

Don’t throw new technology into the mix for the sake of it. Bring it in to solve concrete problems you see.

Focusing initially on small, easy, low-hanging fruit will help build momentum.

Remember that a decomposition technique that worked for you in one area of your monolith may not work somewhere else — you’ll need to be constantly trying new ways of making forward progress.

If you do a big-bang rewrite, the only thing you’re guaranteed of is a big bang. Martin Fowler

Some decisions are consequential and irreversible or nearly irreversible — one-way doors — and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation.

When it comes to decomposing an existing monolithic system, we need to have some form of logical decomposition to work with, and this is where domain-driven design can come in handy.

I consider coming up with a domain model a near-essential step that’s part of structuring a microservice transition.

It’s important to understand that what we need from a domain model is just enough information to make a reasonable decision about where to start our decomposition.

We want some quick wins to make early progress, to build a sense of momentum, and to get early feedback on the efficacy of our approach. This will push us toward wanting to choose easier things to extract. But we also need to gain some benefits from the decomposition.

Some of the things you thought were easy will turn out to be hard. Some of the things you thought would be hard turn out to be easy.

It’s worth noting that metrics can be dangerous because of that old adage “You get what you measure.” Metrics can be gamed — inadvertently, or on purpose.

Sunk cost fallacy is also known as the Concorde fallacy, named for the failed project backed at great expense by the British and French governments to build a supersonic passenger plane.

You don’t need to pull out or change course at the first sign of trouble, but ignoring evidence you are gathering regarding the success (or otherwise) of the change you’re trying to bring about is arguably more foolish than not gathering any evidence in the first place.

these questions bear repeating: What are you hoping to achieve? Have you considered alternatives to using microservices? How will you know if the transition is working?

In addition, the importance of adopting an incremental approach to extracting microservices cannot be overstated.

The goal should be incremental creation of new microservices, and getting them deployed as part of your production solution so that you start learning from the experience and getting the benefits as soon as possible.

You learn a huge amount from the process of having your first few services actually used. Early on, that needs to be your focus.

If you do go down the route of reorganizing your existing monolith along business domain boundaries, I thoroughly recommend Working Effectively with Legacy Code by Michael Feathers (Prentice Hall, 2004). In his book, Michael defines the concept of a seam — that is, a place where you can change the behavior of a program without having to edit the existing behavior.

My general inclination is always to attempt to salvage the existing codebase first, before resorting to just reimplementing functionality, and the advice I gave in my previous book, Building Microservices, was along these lines.

Make sure you understand the pros and cons of each of these patterns. They are not universally the “right” way to do things.

Pattern: Strangler Fig Application

Separating the concepts of deployment from release is important. Just because software is deployed into a given environment doesn’t mean it’s actually being used by customers.

Patterns like the strangler fig, parallel run, and canary release are among those patterns that make use of the fact that deployment and release are separate activities.

The strangler fig pattern allows you to move functionality over to your new services architecture without having to touch or make any changes to your existing system.
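A toy sketch of where that interception happens (my own illustration; in practice a reverse proxy or API gateway usually plays this role): a routing table in front of the monolith sends migrated paths, here a hypothetical /payroll prefix, to the new service and leaves every other request untouched, so functionality moves over one route at a time.

```python
# Toy strangler fig router: migrated paths go to the new service, the rest
# fall through to the untouched monolith.

MIGRATED_PREFIXES = {
    "/payroll": "http://payroll-service.internal",   # hypothetical new microservice
}
MONOLITH = "http://monolith.internal"


def upstream_for(path: str) -> str:
    for prefix, target in MIGRATED_PREFIXES.items():
        if path.startswith(prefix):
            return target
    return MONOLITH


if __name__ == "__main__":
    for path in ("/payroll/run", "/invoices/42"):
        print(path, "->", upstream_for(path))
```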

Pattern: Branch by Abstraction

There is the oft-stated mantra of “Keep the pipes dumb, the endpoints smart” when we discuss microservice architecture.

Embrace the mantra of “smart endpoints, dumb pipes,” something that I still push for.

A content-based routing approach is likely to make more sense as the number of types of consumers increases, although beware of the potential downsides cited previously, especially falling into the “smart pipes” problem.

I thoroughly recommend Enterprise Integration Patterns as a great resource here.

When migrating functionality, try to eliminate any changes in the behavior being moved — delay new features or bug fixes until the migration is complete if you can. Otherwise, you may reduce your ability to roll back changes to your system.

Teams that owned these modules all had to coordinate changes being made inside the monolith, causing significant delays in rolling out changes. This is a classic example of the delivery contention problem.

The app itself is also at this point a monolith: if you want to change one single part of a native mobile application, the whole application still needs to be deployed.

The longer the branch exists, the bigger these problems are.

The branch by abstraction pattern instead relies on making changes to the existing codebase to allow the implementations to safely coexist alongside each other, in the same version of code, without causing too much disruption.

This phase could last a significant amount of time. Jez Humble details the use of the branch by abstraction pattern to migrate the database persistence layer used in the continuous delivery application GoCD (at the time called Cruise). The switch from using iBatis to Hibernate lasted several months — during which the application was still being shipped to clients on a twice weekly basis.
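Here is a bare-bones sketch of the shape of the pattern (mine, not the GoCD code): the codebase is refactored to depend on an abstraction, the old and new implementations coexist behind it in the same version of the code, and a simple toggle decides which one is live. The names (NotificationSender and so on) are hypothetical.

```python
# Branch by abstraction, in miniature: both implementations sit behind one
# abstraction in the same codebase; a toggle selects which is live.
from abc import ABC, abstractmethod


class NotificationSender(ABC):
    """The abstraction the rest of the codebase is refactored to depend on."""

    @abstractmethod
    def send(self, user: str, message: str) -> str: ...


class LegacySmtpSender(NotificationSender):
    def send(self, user: str, message: str) -> str:
        return f"[legacy smtp] {user}: {message}"


class NewQueueSender(NotificationSender):
    def send(self, user: str, message: str) -> str:
        return f"[new queue] {user}: {message}"


def make_sender(use_new_implementation: bool) -> NotificationSender:
    # In a real system this flag would come from configuration, not a constant.
    return NewQueueSender() if use_new_implementation else LegacySmtpSender()


if __name__ == "__main__":
    sender = make_sender(use_new_implementation=False)
    print(sender.send("sam", "build passed"))
```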

If you want to know more about feature toggles and how to implement them, then Pete Hodgson has an excellent write-up.

When removing the old implementation, it would also make sense to remove any feature flag switching we may have implemented. One of the real problems associated with the use of feature flags is leaving old ones lying around — don’t do that! Remove flags you don’t need anymore to keep things simple.

Both the strangler fig pattern and branch by abstraction pattern allow old and new implementations of the same functionality to coexist in production at the same time.

These results are compared and the “correct” one selected, normally by looking for a quorum among the participants. This is a technique known as N-version programming.

But we also can (and should) validate the nonfunctional aspects, too. Calls made across network boundaries can introduce significant latency and can be the cause of lost requests due to time-outs, partitions, and the like.

A spy can stand in for a piece of functionality, allowing us to verify after the fact that certain things were done; it replaces the real functionality, stubbing it out.
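A rough sketch of both ideas together (my own, with hypothetical names): the old implementation stays authoritative, the new one runs alongside it for comparison, any divergence is logged, and a spy records the email the new path would have sent instead of actually sending it.

```python
# Parallel run sketch: the old implementation remains the source of truth, the
# new one runs alongside it for comparison; a spy records the side effect
# instead of performing it, so nothing is sent twice.

def old_award_points(order_total: float) -> int:
    return int(order_total)            # trusted, existing behaviour


def new_award_points(order_total: float) -> int:
    return round(order_total)          # candidate replacement under test


class EmailSpy:
    """Stands in for the real email gateway; records instead of sending."""

    def __init__(self):
        self.sent = []

    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))


def checkout(order_total: float, email_spy: EmailSpy) -> int:
    old = old_award_points(order_total)
    new = new_award_points(order_total)
    if old != new:
        print(f"divergence: old={old} new={new} total={order_total}")
    email_spy.send("customer@example.com", f"You earned {new} points")  # verified later, never sent
    return old                         # the old result is what the caller sees


if __name__ == "__main__":
    spy = EmailSpy()
    print(checkout(19.99, spy), "recorded emails:", len(spy.sent))
```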

A canary release involves directing some subset of your users to the new functionality, with the bulk of your users seeing the old implementation. The idea is that if the new system has a problem, then only a subset of requests are impacted. With a parallel run, we call both implementations.
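The routing decision behind a canary can be as simple as the sketch below (my own illustration; the percentage and hashing scheme are assumptions): a stable hash of the user id puts a fixed slice of users on the new implementation, so a fault only affects that slice.

```python
# Canary routing sketch: a stable hash of the user id puts a fixed percentage
# of users on the new implementation; everyone else stays on the old one.
import zlib

CANARY_PERCENT = 5  # hypothetical rollout percentage


def use_new_implementation(user_id: str) -> bool:
    bucket = zlib.crc32(user_id.encode()) % 100
    return bucket < CANARY_PERCENT


if __name__ == "__main__":
    users = [f"user-{n}" for n in range(1000)]
    on_canary = sum(use_new_implementation(u) for u in users)
    print(f"{on_canary} of {len(users)} users routed to the new implementation")
```

Hashing on the user id rather than picking requests at random keeps each user's experience consistent across requests while the canary is running.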

With dark launching, you deploy the new functionality and test it, but the new functionality is invisible to your users.

Dark launching, parallel runs, and canary releasing are techniques that can be used to verify that our new functionality is working correctly, and reduce the impact if this turns out not to be the case.

All these techniques fall under the banner of what is called progressive delivery

However, like stored procedures, database triggers can be a slippery slope.

“Having one or two database triggers isn’t terrible. Building a whole system off them is a terrible idea.”

Restrictions aside, in many ways this is the neatest solution for implementing change data capture.

The main problem is working out what data has actually changed since the batch copier last ran.
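A minimal sketch of that problem (my own, using an in-memory SQLite table for illustration): a batch copier that relies on an updated_at watermark can only pick up rows the monolith has actually timestamped, which is precisely the assumption that often fails.

```python
# Batch change-data-capture sketch: copy only rows touched since the last run,
# tracked by an updated_at watermark. Assumes the source table records update
# timestamps, which is exactly the hard part the pattern depends on.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, updated_at TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "Ada", "2024-01-01T10:00:00"),
     (2, "Grace", "2024-01-02T09:30:00"),
     (3, "Barbara", "2024-01-03T14:45:00")],
)

last_run = "2024-01-02T00:00:00"  # watermark stored by the previous batch run

changed = conn.execute(
    "SELECT id, name FROM customers WHERE updated_at > ? ORDER BY updated_at",
    (last_run,),
).fetchall()

for row in changed:
    # In a real migration this would publish an event or write to the new service's store.
    print("changed since last run:", row)
```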

Change data capture is a useful general-purpose pattern, especially if you need to replicate data (something we’ll explore more in Chapter 4). In the case of microservice migration, the sweet spot is where you need to react to a change in data in your monolith, but are unable to intercept this at the perimeter of the system using a strangler or decorator, and cannot change the underlying codebase.

1. For a more thorough explanation, see “The Road to an Envoy Service Mesh” by Snow Pettersen at Square’s developer blog.
2. Bobby Woolf and Gregor Hohpe, Enterprise Integration Patterns (Addison-Wesley, 2003).
3. It was nice to hear from Graham Tackley at The Guardian that the “new” system I initially helped implement lasted almost 10 years before being entirely replaced with the current architecture. As a reader of the website, I reflected that I never really noticed anything changing during this period!
4. See Steve Hoffman and Rick Fast, “Enabling Microservices at Orbitz”, YouTube, August 11, 2016.
5. See John Sundell, “Building Component-Driven UIs at Spotify”, published August 25, 2016.
6. It turns out not knowing what you’re doing and doing it anyway can have some pretty disastrous implications.

Although it can be a difficult activity, splitting the database apart to allow for each microservice to own its own data is nearly always preferred.

Technically, we can consider a schema to be a logically separated set of tables that hold data.