Wednesday, May 4, 2016

Micro-monoliths

That really doesn't sound right; for years and years we've thought monoliths are "bad". They are not necessarily bad. For example, if you are happy with your legacy application, if it is not too hard to maintain or update, if it is still making money... I wouldn't try to change it either.

This is again one of those "size matters" debates, and there is probably no absolute right or wrong. In a world of micro-controller instructions, even a few lines of code may look like a monolith, whereas in a world of enterprise giants it may not even be visible.

But let's turn to something else now. Since we have taken a big step towards improving our codebase, let's focus on reusability.

If you are reading this post, there's a good chance you have a few "internet identities". Remember the time you got your first free web-based e-mail account? You also had to come up with a password that you were not supposed to forget. Then things got complicated and we all ended up with quite a few accounts to remember. It's hard... The solution is simple though: either your client application saves the account information and you don't need to remember a thing, or you use a single sign-on service so you can limit the number of passwords to remember. Long story short, welcome to the era of services. It doesn't really matter how these services are accessed or consumed for the time being. The main point is, there are things out there doing things that you would like done, and you can get them to do their magic anytime you want.
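To make that a bit more concrete, here is a minimal sketch (in Python, with a completely made-up service and URL) of what "getting something out there to do its magic" looks like from the consumer's side:

```python
# Consuming a service: the work happens somewhere else, we just ask for
# the result. The service and its URL are hypothetical.
import json
from urllib.parse import quote
from urllib.request import urlopen

def geocode(address: str) -> dict:
    # Imagine a remote service that turns addresses into coordinates.
    url = "https://geo.example.com/lookup?q=" + quote(address)
    with urlopen(url) as resp:
        return json.load(resp)

# We neither know nor care how the service does it; we just call it
# whenever we want.
# print(geocode("221B Baker Street, London"))
```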

Applying this logic to our micro-monoliths, we'll start seeing things differently, at another scale. Now, instead of moving (or maybe packing) our micro-monoliths in with our application core (an orchestrator, maybe?), we may start not deploying them at all, or deploy them onto a specific set of clusters only.
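A rough sketch of that shift, assuming a couple of invented service names and addresses: the orchestrator stops bundling the micro-monoliths and only keeps track of where to reach them, so each one can live on its own cluster, or not be deployed at all.

```python
# Hypothetical orchestrator-side view: micro-monoliths live behind
# addresses instead of inside our own deployment.
from urllib.request import urlopen

SERVICES = {
    # Which micro-monolith runs where (names and hosts are made up).
    "invoicing": "http://invoicing.internal:8001",
    "reporting": "http://reporting.internal:8002",
}

def call(service: str, action: str) -> bytes:
    # The orchestrator only asks a service to do its part of the work;
    # it has no idea how (or on which cluster) that work gets done.
    with urlopen(f"{SERVICES[service]}/{action}") as resp:
        return resp.read()
```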

Voilà! We have successfully improved our design and have a basic understanding of "micro-services".

If you are designing a software application, you are dealing with a microprocessor, which has something called an "instruction pointer". No, I'll not go into the details of processor architecture, but I will focus on the phrase itself. It tells us every bit of software has instructions, so the processor can execute them, and a pointer, so the processor knows what to do now and what to do next. That is, by definition, a flow... a workflow, if you will.
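A toy illustration of that phrase, nothing more: a list plays the role of the instructions, a plain integer plays the role of the pointer, and together they already form a flow.

```python
# Instructions plus a pointer: the smallest possible "workflow engine".
def run(instructions):
    ip = 0                          # what to do now
    while ip < len(instructions):
        instructions[ip]()          # execute the current instruction
        ip += 1                     # ...which determines what to do next

run([
    lambda: print("validate order"),
    lambda: print("charge card"),
    lambda: print("ship goods"),
])
```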

Since we have our building blocks in place now, let's put them together. We are designing and implementing a workflow, and we want it done fast (time to market), efficiently (minimum resources) and in a maintainable way (code aging) so we can stay competitive.

To achieve our goal, we know that we need to break the problem into smaller problems, make the solutions to those smaller problems easy to maintain (at the end of the day, our application will only be as maintainable as its most complex component), and make deployment and versioning as manageable as possible. It may also help if we could "distribute" the execution of our components across nodes (not computers, because a node is simply a resource that can execute some instructions, and not necessarily a computer).

It's clear that by making our micro-monoliths act like services we can achieve all of this. So building a workflow, besides making thousands of design decisions (smiley goes here), is just putting our "micro-services" together.
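As a sketch only (every endpoint and step name below is invented), "putting our micro-services together" can be as simple as calling them in order and handing each one the output of the previous step:

```python
# A hypothetical order workflow built by chaining micro-services.
import json
from urllib.request import Request, urlopen

WORKFLOW = [
    ("validate", "http://orders.internal/validate"),
    ("charge",   "http://payments.internal/charge"),
    ("ship",     "http://shipping.internal/ship"),
]

def run_order(order: dict) -> dict:
    payload = order
    for name, url in WORKFLOW:
        # Each step hands its result to the next; any step can be
        # versioned, redeployed or scaled without touching the others.
        req = Request(url, data=json.dumps(payload).encode(),
                      headers={"Content-Type": "application/json"})
        with urlopen(req) as resp:
            payload = json.load(resp)
    return payload
```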

Of course, easier said than done.

Next: Stack-oriented programming. 

Thursday, April 28, 2016

Do monolithic apps rule?

Simple answer: yes. Check out the previous post if you do not believe me.

The problem with monolithic systems is the headache they may cause once they pass a certain threshold.

Let's have a think about old-school programming for a second. Imagine my team needs to write an office suite with an imaginary spreadsheet application named Zexcel and a word processing application named Zord. (Any resemblance to reality is pure coincidence.)

We can put everything into MyOfficeSuite.exe and release it, right? Or, in order to reuse as much as we can and increase maintainability, we can have zexcel.exe, zord.exe and some common.dll. We can repeat this process again and again until we have "enough" executables and "enough" dynamic libraries. In return we get increased maintainability, increased reusability, better versioning and so on.
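As a loose sketch of the same idea in Python terms (modules standing in for the executables and the DLL; every file and function here is invented), the point is simply that Zexcel and Zord both lean on one shared piece instead of duplicating it:

```python
# common.py -- plays the role of common.dll: code both apps reuse.
def load_document(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

# zexcel.py -- the spreadsheet "executable" pulls in only what it needs.
#   from common import load_document
#   rows = [line.split(",") for line in load_document("book.csv").splitlines()]

# zord.py -- the word processor reuses the very same shared piece.
#   from common import load_document
#   text = load_document("letter.txt")
```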

But what we have done is break that big monolith into smaller monoliths. Like I said, we can repeat that procedure again and again until... yeah... until where?

Well, to be honest, there's no single answer to that question.

That's not a good ending, so I'll say a few more things. I would keep breaking things apart until a single unit of code is deployable and maintainable by itself and can be assigned to a small team of developers.

It needs to be self-contained (in deployment and maintenance) because you'll need to version it without affecting its dependencies (if any) or the systems that depend upon it. And needless to say, you may need to shut it down without shutting the whole system down.

It needs to be maintained by a small team of developers, because a large team means there's a good chance we will break the "single responsibility" principle and put too much logic into that code block. And it's a team, not a single person, not only because every resource needs a backup, but also because that code block is more than a simple function call: it must be properly designed, with reliability and reusability ensured. It also helps with project management, obviously.

Let's call these small structures "micro-monoliths". Well, they're small, so they're "micro", but in a world of micro creatures they're still monoliths.

We have successfully broken the problem into more manageable pieces. Now, instead of one team of developers working towards a big delivery, we have multiple teams that can deliver independently. We also have better isolation and abstraction, which means increased maintainability and easier deployment and versioning. And there's a good chance that each team now has smaller technical challenges to face compared to the first scenario.

Let's discuss micro-monoliths in the next one.

Wednesday, April 27, 2016

Monolithic apps rule!

Well, that's a bold one and obviously needs some criticism. Yet, that being said, monolithic apps rule. (And this is recursion.)

Because they are easy to "visualize" and as a programmer you need to be able to visualize to be able to create. 

Because they are easy to deploy. I guess the term "monolithic" speaks for itself here.

Because they are easy to debug. Everything you need is packed together after all.

Because I'm sure we can keep adding items to this list.

And it will work great as long as your client base does not grow, or the system is designed to be consumed by one tenant only, or, if not, all your tenants share the same business flow, or... well, you get the idea.

At some point, if you happen to hit one of those barriers, there's a good chance all hell will break loose. You may encounter anything from spending a lot of money on hardware to rewriting (not refactoring) parts and pieces of your app (or maybe all of it) from scratch.

This is a common problem many start-ups face: you start with something small that's not very well coded (some call it an MVP). And it looks OK with 100 clients making 3 requests every day. But once you get the investment (even if it is you putting the cash in), those numbers will ideally need to be multiplied by hundreds if not thousands.

That's why, if you can come up with a monolithic application idea that doesn't necessarily change from one customer to another, that can be deployed without a headache, and that can scale per client installation, you've got a winner.

Well yeah, notepad.exe is still my favorite application. Single-threaded, doesn't need much processing power, you can have tens of them running in parallel, and it can edit any type of text I need.

However, this is not how it works in the real world, is it? In the real world, most of the time, we deal with services and the rest becomes just another presentation layer. (Unless you are working on the next operating system.)
And in the real world there is this incredible thing called the "Internet", and we don't actually know how many clients we are going to have, or how many bad guys will try to break the system.

That's a very good reason to rethink our strategy. In a world with a lot of unknowns and parameters that we cannot control, we need to ensure our system can be modified, updated, deployed and configured seamlessly.

To be able to achieve that (or a bit of that) we need to step outside of our safe zone and get into the wilderness.