TransWikia.com

Monolith to microservices - Staging / UAT environments

Software Engineering Asked by stevanity on February 28, 2021

In our organization we're looking to adopt a service-oriented architecture where new requirements (that are natural bounded contexts) are built as separate services that integrate with the main monolith via APIs and sometimes frontends as well (iframes today; some form of web components later on).

Today our monolith dev pipeline has a few fixed non-prod environments (qa, staging, demo, etc.). As we build separate services, is the best practice to have these services be part of these non-prod environments?

The other option I see is to have just staging and production for these smaller services and point all non-prod monolith environments at the single staging environment. But then the issue becomes that this one staging environment needs to isolate and handle data coming from several different monolith environments.

I understand that the cleanest and safest approach is to have all services (including the monolith) running together in any and all environments. But then coordinating between the folks responsible for these services also becomes necessary, right?

Also, we’re looking to provide developers their own on-demand preview environments. In such cases as well, do we have to have all these other services spun up?

Before we take a short-cut approach to this problem, I just wanted to get an idea of unforeseen consequences that others might have faced in their teams.

2 Answers

While things are hybrid

If your monolith is difficult to deploy, requires a lot of resources, or has licensing costs even for development environments, then you'll need to limit the number of instances where the monolith lives. Your options are to create a simulator that mimics what the monolith would do so you can test the microservices against it, or to manipulate the data stores directly from your tests.
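
A simulator can be as simple as a stub HTTP server that returns canned responses for the monolith endpoints your microservices depend on. A minimal sketch using only the Python standard library (the endpoint path and payload here are hypothetical, not part of any real monolith):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for the monolith's API.
# The endpoint and payload below are illustrative only.
CANNED = {
    "/api/customers/42": {"id": 42, "name": "Test Customer", "tier": "gold"},
}

class MonolithStub(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_stub(port=0):
    """Start the stub on an ephemeral port; return (server, port)."""
    server = HTTPServer(("127.0.0.1", port), MonolithStub)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

In tests, you point the microservice's "monolith base URL" setting at the stub instead of a real monolith instance, which sidesteps the deployment and licensing problem entirely.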

However, if your monolith is reasonably easy to deploy, you can automate its deployment as well. It would just be a "bigger" service, so to speak.

Ideal Environment

You'll find that one of the key factors for success with microservices is automating deployment as much as possible. Whether you use Chef, Salt, Terraform, CloudFormation, or containers with an orchestration layer like Kubernetes is a choice you'll have to make.

There are many ways of solving this problem, but the heart of it is that you need to make your deployment and configuration as automated as possible. Some of the ways to make that easier include:

  • Externalizing configuration: The deployment system pushes the configuration to the services
  • Service Discovery: Either use a dedicated service discovery component, or leverage your infrastructure (i.e. DNS entries or some of the many ways that Kubernetes makes it easier to find a service)
  • Protect Secrets: Secrets like usernames and passwords, or client id and secret ids for OAuth 2 authentication shouldn't be passed in the clear. You can leverage your externalized configuration if you have a means of encrypting and decrypting on the fly.
  • Continuous Integration/Continuous Delivery: Every commit to the right branch or tag should build and deploy the software to the right environment.
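
The first two points above can be sketched in a few lines: the deployment system sets environment variables, and the service resolves its dependencies by DNS name instead of a hard-coded address. The variable names and the `billing.internal` hostname are assumptions for illustration, not a real convention:

```python
import os
import socket
from dataclasses import dataclass

@dataclass
class Config:
    # Populated by the deployment system (Kubernetes env vars,
    # Chef templates, etc.), never hard-coded in the image.
    billing_host: str
    billing_port: int
    db_password: str  # injected from a secrets store, not source control

def load_config() -> Config:
    return Config(
        billing_host=os.environ.get("BILLING_HOST", "billing.internal"),
        billing_port=int(os.environ.get("BILLING_PORT", "8080")),
        db_password=os.environ.get("DB_PASSWORD", ""),
    )

def discover(host: str) -> str:
    """DNS-based service discovery: the name stays stable across
    environments while the address it resolves to changes."""
    return socket.gethostbyname(host)
```

Because nothing environment-specific lives in the code, the same artifact can be promoted unchanged from qa to staging to production; only the injected configuration differs.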

By making deployment part of the whole process, your question answers itself. Once you've gone through the hassle of automating deployment and configuring the different environments appropriately, why wouldn't you deploy your service to all environments whenever changes are made? That makes both automated and ad hoc testing easier to do.

Team Responsibilities

When you talk to large software organizations like Amazon (the storefront), Netflix, Airbnb, etc., there is a common mantra: one team per service. That team is responsible for everything related to the service, including deployment, testing, recovery testing, monitoring, etc. Each team would ideally be a "two-pizza" team (roughly 5-8 people, depending on how hungry they are).

For smaller development organizations like mine, that's a luxury most companies just don't have. For example, I have two teams working for me, integrating and building out two applications into one. To handle coordination, we incorporate planning meetings into our normal scrum cadence.

  • Every week we have a Scrum of Scrums, where each team lead works with the architect (me in this case), and business folks so we can resolve issues related to technology, schedule, or business priorities.
  • At release planning we identify the areas we need to coordinate more tightly.

Our releases are typically four sprints' worth of work, followed by a deployment to production. However, our customer has a lot more bureaucracy around releasing software than a commercial group would. Your experience will likely be different. I can say from experience, though: the more often you deploy, the more critical automating that deployment becomes.

Correct answer by Berin Loritsch on February 28, 2021

I wrote a six-minute blog post dedicated specifically to this subject. I'll break it down into a few points here (essentially a TL;DR) and add a few points related to the author's question.

  • Production is the most important environment and should be treated as such - deployments to Production must be tested in all possible aspects to decrease the risk of downtime and unforeseeable consequences (relating to the author's statement).
  • "...Developers on-demand preview environments..." - Doing this doesn't mean you have to deploy the whole "real" infrastructure per developer. Developers need an environment where they can test their code without affecting their colleagues; that's the main goal of on-demand environments. Rather than duplicating the real infrastructure, it's very common and recommended to spin up a Kubernetes namespace (or small cluster) per developer that includes only the relevant resources, such as a web server (nginx, Apache) and a database (MySQL, PostgreSQL). The real challenge is populating the database with relevant data and getting secrets/parameters from the Development environment, and if you have a good DevOps team, they should be able to do that with no problem.
  • We are human beings - You don't want one of your DevOps engineers to apply a change in Production by accident, just because they thought they were in the Development environment. Clearly separated environments (ideally separate accounts) make that kind of mistake much harder.
  • Multiple environments mean more money (NOT) - It's important to take into account the costs of separating environments into different accounts. For example, if dev and stg live in one AWS account, a single AWS Web Application Firewall (WAF) can be attached to resources of both. Separating dev and stg into different accounts means you'll need to create the WAF resource in each account, and hence pay for it "twice". It all comes down to what will cost more: paying extra for resources that could have been shared between dev and stg, or having unpredictable deployments to prd that might result in unwanted downtime. Once you answer that question, you'll know whether you're willing to separate stg from dev.
  • (Optional) Regulations - If your organization is required to meet a regulation such as GDPR or HIPAA, then splitting environments into separate accounts increases your chances of fulfilling those requirements. Of course, many things need to be considered when aiming for compliance, but in my past experience, separating environments was always the first thing on the table. It's different for each organization.

There are many more reasons why you should separate your environments, but I think that's enough for now. I hope it helped.

Answered by Meir Gabay on February 28, 2021
