That really doesn't sound right; for years and years we have been told that monoliths are "bad". They are not necessarily bad. For example, if you are happy with your legacy application, if it is not too hard to maintain or update, and if it is still making money... I wouldn't try to change it either.
This is again one of those "size matters" debates, and there is probably no absolute right or wrong. In a world of micro-controller instructions, even a few lines of code may look like a monolith, whereas in a world of enterprise giants it may not even be visible.
But let's shift focus now. Since we have already taken a big step towards improving our codebase, let's work on improving reusability.
If you are reading this post, there's a good chance you have a few "internet identities". Remember the time you got your first free web-based e-mail account? You also had to come up with a password that you were not supposed to forget. Then things got complicated and we all ended up with quite a few accounts to remember. It's hard... The solution is simple, though: either your client application saves the account information so you don't need to remember a thing, or you use a single sign-on service to limit the number of passwords you have to remember. Long story short, welcome to the era of services. It doesn't really matter how these services are accessed or consumed for the time being. The main point is that there are things out there doing things that you would like done, and you can get them to do their magic anytime you want.
Applying this logic to our micro-monoliths, we'll start seeing things differently, at another scale. Instead of moving (or maybe packing) our micro-monoliths into our application core (an orchestrator, maybe?), we may choose not to deploy them at all, or to deploy them onto a specific set of clusters only.
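To make that concrete, here is a minimal sketch of the same capability consumed two ways: packed with the application core, or reached as a service somewhere else. The module name and endpoint are made up for illustration only.

    # Two ways for the core to consume a "micro-monolith" capability.
    # The module name and URL below are hypothetical.
    import urllib.request
    import json

    def generate_report_in_process(data):
        # Option 1: the micro-monolith ships with the core and is called directly.
        from report_generator import generate   # hypothetical local module
        return generate(data)

    def generate_report_as_service(data):
        # Option 2: the micro-monolith is deployed elsewhere (maybe on its own
        # cluster) and consumed remotely; the core no longer carries its code.
        request = urllib.request.Request(
            "http://reports.internal.example/generate",   # hypothetical endpoint
            data=json.dumps(data).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())

Either way, the caller gets a report back; what changes is where the work runs and what we have to deploy alongside the core.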
Voilà! We have successfully improved our design and now have a basic understanding of "microservices".
If you are designing a software application, you are dealing with a microprocessor, which has something called an "instruction pointer". No, I will not go into the details of processor architecture, but I will focus on the phrase itself. It tells us that every bit of software has instructions, so the processor can execute them, and a pointer, so the processor knows what to do now and what to do next. That is, by definition, a flow... a workflow, if you will.
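Here is a toy sketch of that idea: a workflow is just an ordered list of steps plus a pointer that tells us what to do now and what comes next. The steps are made up purely for illustration.

    # A workflow as "instructions plus a pointer".
    def run_workflow(steps, context):
        pointer = 0                      # the "instruction pointer" of our workflow
        while pointer < len(steps):
            step = steps[pointer]
            context = step(context)      # do the current thing
            pointer += 1                 # move on to the next thing
        return context

    # Hypothetical steps for illustration only.
    validate = lambda ctx: {**ctx, "valid": True}
    enrich   = lambda ctx: {**ctx, "enriched": True}
    persist  = lambda ctx: {**ctx, "saved": True}

    print(run_workflow([validate, enrich, persist], {"order_id": 42}))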
Since we have our building blocks in place, let's put them together. We are designing and implementing a workflow, and we want to do it fast (time to market), efficiently (minimum resources) and maintainably (code aging) to stay competitive.
To achieve our goal, we know that we need to break the problem into smaller problems and make the solutions of those smaller problems easy to maintain; at the end of the day, our application will only be as maintainable as its most complex component. We also need to make deployment and versioning as manageable as possible. It may help, too, if we could "distribute" the execution of our components across nodes (not necessarily computers, because a node is simply a resource that can execute some instructions).
It's clear that by making our micro-monoliths act like services we can achieve all of that. So building a workflow, besides making thousands of design decisions (smiley goes here), is just putting our "micro-services" together.
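As a rough sketch of what "putting them together" might look like, an orchestrator only needs to know which service to call for each step, not how any step is implemented. The service names and URLs here are entirely hypothetical.

    # An orchestrator running a workflow whose steps are remote services.
    # Service URLs are made up for illustration.
    import urllib.request
    import json

    WORKFLOW = [
        "http://inventory.internal.example/reserve",
        "http://payments.internal.example/charge",
        "http://shipping.internal.example/schedule",
    ]

    def call_service(url, payload):
        request = urllib.request.Request(
            url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())

    def run_order_workflow(order):
        # Each step's output feeds the next, exactly like a local call chain,
        # except every step may run on a different node or cluster entirely.
        result = order
        for step_url in WORKFLOW:
            result = call_service(step_url, result)
        return result

Each step may live on a different node or cluster, or not be deployed by us at all, and the workflow code itself does not have to change.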
Of course, easier said than done.
Next: Stack-oriented programming.
Wednesday, May 4, 2016
Micro-monoliths
Tags: distributed, microservice, monolithic, SOA