Software Engineering Asked by Daniel Voina on January 6, 2021
So… The architecture has recently been put under the reign of a Reference Architect.
The Reference Architect, whom I will refer to as the RA, got to work and the results were immediately visible: we stopped calling microservices "microservices" and started calling them "blocks".
As we used to be a mixed Java/Linux and C#/.NET shop, things were working pretty well until the RA issued a decree that no more development shall be done on Java and Linux. We tried to explain that when using microservices (ouch… blocks) the implementation stack is not very relevant, that having multiple stacks gives us more opportunities because there is room for evolution, and that the cost of Linux stacks is generally lower in the cloud. In response, the RA sent a cease-and-desist memo urging us to move to his favourite vendor.
Apart from the obvious cost- and quality-related arguments, what others could we use to make a case for keeping our code? There is a lot of legacy, and porting it to a new platform would bring no business value, only a decrease in quality and huge delays.
The arguments used so far are
What am I missing in order to make a compelling case against a single stack? In a microservice architecture, shouldn't the architect focus more on delivering value and on clear interfaces and mechanisms than on concrete implementations? Microservices can evolve independently and choose their own stack for functional and non-functional reasons; forcing everyone into conformity would lead to degeneration and brittleness. Of course a single stack seems more economical and promises code reuse, but as far as I know the business domains are quite different in the two worlds, so reuse will be hard anyway.
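The interfaces-over-implementations argument can be made concrete: a consumer of a block depends only on the agreed wire format, never on the producer's stack. A minimal sketch in Python (the order payload, its field names, and the contract shape are hypothetical, invented for illustration, not taken from the question):

```python
import json

# The agreed contract for an "order" response: field names and types.
# Whether the producing block runs Java on Linux or C# on Windows is
# invisible to any consumer that validates only against this contract.
ORDER_CONTRACT = {"id": int, "status": str, "total_cents": int}

def parse_order(payload: str) -> dict:
    """Decode a block's JSON response and validate it against the contract."""
    order = json.loads(payload)
    for field, expected_type in ORDER_CONTRACT.items():
        if not isinstance(order.get(field), expected_type):
            raise ValueError(f"contract violation on field {field!r}")
    return order

# The same bytes could have been produced by any stack.
java_block_response = '{"id": 42, "status": "SHIPPED", "total_cents": 1999}'
order = parse_order(java_block_response)
print(order["status"])  # SHIPPED
```

The point of the sketch is that swapping the producer's implementation stack changes nothing on the consumer side as long as the contract holds, which is exactly why a stack mandate buys little in a block-based architecture.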
There are two opposing forces in play here: code base quality vs. feasibility (both in terms of available technologies and of time and money). Maintaining a homogeneous code base grants higher overall quality in terms of maintainability and reusability. Using multiple technologies is more realistic, though, a point you covered in your question. I'd say it's not a question of right and wrong but of balance.
What I've seen in practice is that no matter how hard you strive to keep the code base homogeneous, when a project grows there always comes a time when an integration with an external system introduces a new technology (e.g. the state health insurance system, or a remote 200,000-record pharmaceutical database), or there is a need for some exotic library (say, a video-processing module that only has an efficient implementation in pure C).
And there is another principle that I value: don't mess with code if it works. A microservices architecture serves this principle very well because it isolates the separate parts, making it possible to leave the proven services alone to do their job while developers focus on new features.
One last thought: in a real business environment, where every line of code costs money, changing strategy and rewriting code must be strongly motivated. Decision making runs from need towards implementation, meaning there must be a very clear need to satisfy (e.g. a crucial bug that breaks the core user experience) and a very clear reason why the existing code must be rewritten (the bug lives in spaghetti code that cannot be fixed without introducing new bugs, so the spaghetti must be amputated and replaced with something better). Code perfectionism is not a good motivation, as it runs against this direction.
I believe "balance" is the key term here. Your new architect breaks the balance between cost and code base quality, and that is what is bothering you.
Correct answer by Daniel Apostolov on January 6, 2021