Asked by astroanu on November 7, 2021
My company has a mobile app for which we release a new update every 2-3 weeks. Each release has about 50-60 Jira tickets attached, and the amount of code change between releases is very high. We do not follow any CI/CD practices, so releases happen once or twice a month after approval from the CAB.
Is having too many changes in a single release a bad way of release management?
This question as posed is quite leading: the use of the adjective "bad" to describe what you are doing implies that there is a "good" way. Nobody can answer whether your process is "good" or "bad" but you; it should be judged on its effectiveness. Perhaps this way of doing things is good for your environment...
However, what we can say is that the situation you describe goes against the principles of continuous delivery and continuous integration.
For example, describing the principle of "work in small batches", the Continuous Delivery website states:
In traditional phased approaches to software development, handoffs from dev to test or test to IT operations consist of whole releases: months worth of work by teams consisting of tens or hundreds of people.
This sounds like your situation.
It goes on to state:
In continuous delivery, we take the opposite approach, and try and get every change in version control as far towards release as we can, getting comprehensive feedback as rapidly as possible. Working in small batches has many benefits. It reduces the time it takes to get feedback on our work, makes it easier to triage and remediate problems, increases efficiency and motivation, and prevents us from succumbing to the sunk cost fallacy.
So, by this comparison, yes, your release management is "bad", because it doesn't work in small batches, or continuously. (Again, with the caveat that any process that can be executed effectively is a good process.)
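To make "work in small batches" concrete: the point of continuous integration is that every change, however small, gets the same fast, automated feedback. Below is a minimal sketch of such a commit-stage check, assuming a hypothetical Python project checked with ruff, pytest, and the build package; the tool list is illustrative, not prescriptive.

```python
# Minimal commit-stage sketch: run the same fast checks on every push,
# so each small change gets feedback in minutes, not at release time.
# Assumes ruff, pytest, and the 'build' package are installed.
import subprocess
import sys

STAGES = [
    ("lint",  ["ruff", "check", "."]),
    ("tests", ["pytest", "-q"]),
    ("build", ["python", "-m", "build"]),
]

def commit_stage() -> int:
    for name, cmd in STAGES:
        print(f"--- {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"{name} failed; fix before merging")  # fail fast
            return result.returncode
    print("all checks passed; change is releasable")
    return 0

if __name__ == "__main__":
    sys.exit(commit_stage())
```

Because each batch is small, a red result points at a handful of commits rather than weeks of work, which is exactly the triage benefit the quote above describes.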
Another widely used reference text, the Google Site Reliability Engineering book, has a whole chapter on release engineering. It says:
We have embraced the philosophy that frequent releases result in fewer changes between versions
Your process seems to run counter to this statement. Though you don't specify what level of testing or canary deployment you use, one may surmise from the absence of continuous integration that it is minimal, if any. Therefore, by "SRE logic", every new deploy is high-risk, because it contains a large amount of untested change.
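To illustrate what a canary deployment would add here, below is a hedged sketch of a progressive rollout in Python. The stubs `set_traffic_split`, `observed_error_rate`, and `rollback` are invented for the example; in a real system they would wrap your load balancer and monitoring APIs.

```python
# Sketch of a progressive (canary) rollout: expose the new build to a
# growing fraction of traffic, watch an error metric, roll back on regression.
import random
import time

CANARY_STEPS = [0.01, 0.05, 0.25, 1.00]  # share of traffic on the new build
ERROR_BUDGET = 0.02                      # abort if more than 2% of requests fail
SOAK_SECONDS = 5                         # short for the demo; minutes/hours in practice

def set_traffic_split(version: str, fraction: float) -> None:
    # Stub: in reality, call your load balancer / service mesh API.
    print(f"routing {fraction:.0%} of traffic to {version}")

def observed_error_rate(version: str) -> float:
    # Stub: in reality, query your monitoring system for this version.
    return random.uniform(0.0, 0.03)  # simulated metric

def rollback(version: str) -> None:
    # Stub: in reality, shift all traffic back to the previous build.
    print(f"rolling {version} back")

def progressive_rollout(version: str) -> bool:
    """Increase the canary's traffic share step by step, gated on a metric."""
    for fraction in CANARY_STEPS:
        set_traffic_split(version, fraction)
        time.sleep(SOAK_SECONDS)  # let the metric stabilise
        rate = observed_error_rate(version)
        if rate > ERROR_BUDGET:
            rollback(version)
            print(f"aborted at {fraction:.0%}: error rate {rate:.2%}")
            return False
    return True  # the new build now serves all traffic

if __name__ == "__main__":
    progressive_rollout("app-v42")
```

The design point is that each step exposes only a fraction of users to the untested change, so a bad release becomes an automatic rollback rather than a full-scale incident.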
There is another aspect which increases the risk of doing things "your" way. It is not reasonable to expect a CAB to make an informed decision on whether or not a given build should be released without feedback on the performance or behaviour of the application in its target environment (i.e. production). They will be making decisions based on predictions and assumptions rather than data.
For all these reasons, and probably many more, I think the consensus of this community would be that the answer to your question is "yes, this is a bad way of doing release engineering".
A better way would be to make smaller, more frequent deployments: smaller changes at a time are easier to fix, and observing the deployment procedure continuously gives better insight into its reliability and where to improve things.
Automate, measure, improve. Repeat.
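As one example of "measure", the sketch below computes two commonly tracked delivery metrics (deployment frequency and change failure rate) from a hand-written, hypothetical deployment record; real numbers would come from your CI/CD or ticketing system.

```python
# Sketch: derive deployment frequency and change failure rate from a
# hypothetical record of past deployments.
from dataclasses import dataclass
from datetime import date

@dataclass
class Deployment:
    day: date
    failed: bool  # needed a hotfix or rollback?

# Hypothetical sample data for illustration only.
deployments = [
    Deployment(date(2021, 10, 4),  failed=False),
    Deployment(date(2021, 10, 18), failed=True),
    Deployment(date(2021, 11, 1),  failed=False),
]

span_days = (deployments[-1].day - deployments[0].day).days or 1
frequency = len(deployments) / span_days  # deploys per day
failure_rate = sum(d.failed for d in deployments) / len(deployments)

print(f"deployment frequency: one every {1 / frequency:.1f} days")
print(f"change failure rate:  {failure_rate:.0%}")
```

Tracking even these two numbers over time shows whether moving to smaller, more frequent releases is actually reducing risk, which also gives the CAB real data to decide on.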
Answered by Bruce Becker on November 7, 2021