Simple answer: yes. Check out the previous post if you don't believe me.
The problem with monolithic systems is the headache they cause once they grow past a certain threshold.
Let's have a think about old-school programming for a second. Imagine my team needs to write an office suite with an imaginary spreadsheet application named Zexcel and a word processing application named Zord (any resemblance to reality is pure coincidence).
We can put everything into MyOfficeSuite.exe and release it, right? Or, to reuse as much code as possible and improve maintainability, we can split it into zexcel.exe, zord.exe and a shared common.dll. We can repeat this process again and again until we have "enough" executables and "enough" dynamic libraries. In return we get better maintainability, better reusability, better versioning and so on.
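To make that split concrete, here's a minimal sketch in Go rather than Windows executables and DLLs; the module path, package and function names are all made up for illustration, but the shape is the same: two binaries sharing one library.

```go
// common/format.go — the shared code, playing the role of common.dll.
package common

import "fmt"

// FormatNumber renders numbers the same way in every application
// that links against this package.
func FormatNumber(v float64) string {
	return fmt.Sprintf("%.2f", v)
}
```

```go
// cmd/zexcel/main.go — builds into its own executable, like zexcel.exe.
// cmd/zord/main.go would look almost identical.
package main

import (
	"fmt"

	"example.com/officesuite/common"
)

func main() {
	// Both Zexcel and Zord reuse the shared formatting logic.
	fmt.Println("A1 =", common.FormatNumber(3.14159))
}
```

Each binary can now be rebuilt and shipped on its own, as long as the shared package stays compatible.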
But what we have really done is break that big monolith into smaller monoliths. Like I said, we can repeat that procedure again and again until... yeah... until where?
Well, to be honest, there's no single answer to that question.
That's not a good ending, so I'll say a few more things. I would keep breaking things apart until a single unit of code is deployable and maintainable by itself and can be assigned to a small team of developers.
It needs to be self-contained (in both deployment and maintenance) because you'll need to version it without affecting its dependencies (if any) or the systems that depend on it. And needless to say, you may need to shut it down without shutting the whole system down.
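As a sketch of what "self-contained" can look like in practice, assume the unit ships as a tiny HTTP service (the port, endpoint and version number here are invented for illustration). It carries its own version and shuts down gracefully on its own, without anyone touching the rest of the system:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"os"
	"os/signal"
	"time"
)

// version is bumped independently of every other unit in the system.
const version = "1.3.0"

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/version", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, version)
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	// Serve in the background so main can wait for a shutdown signal.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			fmt.Fprintln(os.Stderr, err)
		}
	}()

	// Stopping this unit does not require stopping anything else:
	// on Ctrl+C we drain in-flight requests and exit on our own.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, os.Interrupt)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	_ = srv.Shutdown(ctx)
}
```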
It needs to be maintained by a small team of developers, because with a large team there's a good chance we'll break the single responsibility principle and pile too much logic into that unit. And it's a team, not a single developer, partly because everyone needs a backup, and partly because that unit is more than a simple function call: it must be properly designed, with reliability and reusability built in. It obviously helps with project management too.
Let's call these small structures "micro-monoliths". They're small, so they're "micro", but in a world of micro creatures they're still monoliths.
We have successfully broken the problem into more manageable pieces. Instead of one team of developers working towards one big delivery, we have multiple teams that can deliver independently. We also get better isolation and abstraction, which means improved maintainability and easier deployment and versioning. And there's a good chance each team now faces smaller technical challenges than in the first scenario.
Let's discuss micro-monoliths in the next one.