Asked by Aerodynamika on August 22, 2021
Suppose I have two apps launched on an AWS ECS cluster (as Docker containers).
I want to expose one app to the world via a public IP (which I do via an AWS load balancer), but the other one should only be accessible internally, without any public IP.
Is this possible at all? I suppose it should be easier with Docker containers, because I could make them communicate with each other over localhost by using
--network="host" in docker run
But that would only work if I run the two apps on the same EC2 instance.
What if I run them on separate instances behind the same load balancer, or on separate instances in the same AWS availability zone?
What setting would I use in ECS to expose this app only via localhost?
First, @Meir beat me to the punch on awsvpc network mode, so give him a +1 and read his linked document about task networking.
I'm going to expand on that and include a description of how to route to your containers both from a public-facing ALB and from another container.
To simplify things, we'll pretend we're running on ECS Fargate to start, and at the end I'll point out how that would differ on EC2.
You described two ECS services: one that is public-facing behind the ALB, which we'll call frontend, and the other, reachable only internally by other containers, which we'll call backend.
Here's what frontend might look like in Terraform:
resource "aws_ecs_task_definition" "frontend" {
family = "frontend"
network_mode = "awsvpc"
requires_compatibilities = ["FARGATE"]
execution_role_arn = "" # generic
container_definitions = "[]" # not included in example
}
resource "aws_ecs_service" "frontend" {
name = "frontend"
cluster = "my-ecs-cluster"
task_definition = aws_ecs_task_definition.frontend.arn
desired_count = 2 # high availability
launch_type = "FARGATE"
network_configuration {
security_groups = [] # SG allowing traffic from the loadbalancer to port 8080
subnets = [] # list of private subnets
}
load_balancer {
target_group_arn = "" # an aws_alb_target_group
container_name = "frontend"
container_port = 8080
}
}
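For context, here's a rough sketch of the ALB side that the target_group_arn placeholder above points at; the names, ports, and listener protocol are assumptions, not part of the original setup:

resource "aws_alb" "frontend" {
  name               = "frontend-alb" # assumed name
  internal           = false          # public-facing
  load_balancer_type = "application"
  security_groups    = []             # SG allowing 80/443 from the internet
  subnets            = []             # list of public subnets
}

resource "aws_alb_target_group" "frontend" {
  name        = "frontend"
  port        = 8080
  protocol    = "HTTP"
  target_type = "ip" # required for tasks in awsvpc mode
  vpc_id      = ""   # the same VPC as the service
}

resource "aws_alb_listener" "frontend" {
  load_balancer_arn = aws_alb.frontend.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.frontend.arn
  }
}

With something like that in place, the load_balancer block above would reference aws_alb_target_group.frontend.arn.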
backend would look similar, except instead of a load_balancer block it would have this:
service_registries {
  registry_arn = aws_service_discovery_service.backend.arn
}
And these new resources:
resource "aws_service_discovery_private_dns_namespace" "internal" {
name = "internal.dev"
vpc = "" # a vpc id
}
resource "aws_service_discovery_service" "backend" {
name = "backend"
dns_config {
namespace_id = aws_service_discovery_private_dns_namespace.internal.id
dns_records {
ttl = 10
type = "A"
}
}
}
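Putting those pieces together, a sketch of the backend service might look like this (port 8080, the desired count, and the security group/subnet placeholders are assumptions carried over from the frontend example):

resource "aws_ecs_service" "backend" {
  name            = "backend"
  cluster         = "my-ecs-cluster"
  task_definition = aws_ecs_task_definition.backend.arn # a task definition analogous to frontend's
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    security_groups = [] # SG allowing traffic from frontend to port 8080
    subnets         = [] # list of private subnets
  }

  service_registries {
    registry_arn = aws_service_discovery_service.backend.arn
  }
}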
Service Discovery creates and manages DNS records for the private IPs (ENIs) of your backend tasks automatically. That means backend.internal.dev now resolves to one of those IPs in your VPC CIDR, something like 10.10.0.60.
Now when frontend wants to connect to backend, it can do so via backend.internal.dev. No internal ALB required! Your entire VPC can resolve that domain.
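As an illustration, the frontend task definition's container_definitions (left empty above) could hand that name to the app through an environment variable; the image, variable name, and port below are hypothetical:

# inside aws_ecs_task_definition.frontend
container_definitions = jsonencode([
  {
    name      = "frontend"
    image     = "my-registry/frontend:latest" # hypothetical image
    essential = true
    portMappings = [
      { containerPort = 8080, protocol = "tcp" }
    ]
    environment = [
      # the app reads this and reaches backend by its service discovery name
      { name = "BACKEND_URL", value = "http://backend.internal.dev:8080" }
    ]
  }
])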
"Okay, but how does this work on EC2?" you might ask. Well, that's the beauty of awsvpc
network mode... it works the same way! These task ENIs are completely different network devices than your EC2 instance's ENI. You will need to modify the Terraform slightly, and perhaps review your security groups, but ultimately nothing has changed: the ALB still connects to the frontend task ENI, and the backend task ENI is still registered in service discovery. You can run multiples of the same container on a single EC2 instance without port collision because they are using different network interfaces.
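For example, the earlier frontend resources would change in roughly these spots for the EC2 launch type (a sketch; everything not shown stays as before):

resource "aws_ecs_task_definition" "frontend" {
  family                   = "frontend"
  network_mode             = "awsvpc" # unchanged: each task still gets its own ENI
  requires_compatibilities = ["EC2"]  # was ["FARGATE"]
  container_definitions    = "[]"     # not included in example
}

# ...and in aws_ecs_service.frontend, set launch_type = "EC2" instead of "FARGATE".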
References:
https://aws.amazon.com/blogs/aws/amazon-ecs-service-discovery/
https://aws.amazon.com/about-aws/whats-new/2019/06/Amazon-ECS-Improves-ENI-Density-Limits-for-awsvpc-Networking-Mode/
Answered by Woodland Hunter on August 22, 2021
Regardless of the instance count, you can publish only the port of the publicly available app by adding a security group rule that allows external traffic to the host port that app is bound to; for the second app, just bridge it to the first one (keep it on the internal network, with no ingress rule for its port).
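In Terraform terms, that ingress rule might be a sketch like this; the security group reference and port 80 are assumptions:

resource "aws_security_group_rule" "public_app" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]                       # open to the world
  security_group_id = aws_security_group.ecs_instances.id # hypothetical SG on the ECS instances
}

# No equivalent rule is created for the second app's port, so it stays internal.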
Answered by Hakob on August 22, 2021
Have you tried using awsvpc network mode? https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
You can follow this tutorial - https://docs.aws.amazon.com/AmazonECS/latest/userguide/create-public-private-vpc.html
You need a VPC with at least two subnets, one private and one public. After that, the sky is the limit...
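A minimal sketch of that layout, assuming example CIDRs and a single availability zone (the public subnet also needs an internet gateway and a route to it, omitted here):

resource "aws_vpc" "main" {
  cidr_block = "10.10.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.10.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.10.2.0/24"
  availability_zone = "us-east-1a"
}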
If you need internet access from the private subnet, create a NAT Gateway in the public subnet and route 0.0.0.0/0 traffic from the private subnet through the NAT Gateway - https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
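Continuing that sketch, the NAT Gateway and the 0.0.0.0/0 route for the private subnet could look like this:

resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id # the NAT Gateway lives in the public subnet
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "private" {
  subnet_id      = aws_subnet.private.id
  route_table_id = aws_route_table.private.id
}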
P.S. - I never use host network mode with ECS; you should read more about it here: https://docs.docker.com/network/host/
I haven't found a strong use case for it, except for testing purposes.
Answered by Meir Gabay on August 22, 2021