-
In general, I would keep managers out of the security group for incoming traffic and dedicate worker nodes to receiving it. This all depends on your swarm size and design, which I go into in this video. If you want to avoid HTTP reverse proxies in your swarm and you want to serve multiple public HTTP URLs, you can use AWS ALBs, which support host-header routing, and route each hostname to a specific published port on your swarm. Each swarm service published for HTTP needs a unique port, and the ALB can forward to that port on the specific set of workers you designate for incoming traffic. A sketch of that setup is below.
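For illustration only, here is a minimal boto3 sketch of that pattern, assuming the two hostnames from the question below (www.service.com and staging.service.com), hypothetical published ports 8080 and 8081, and that you already have an ALB listener, a VPC, and the instance IDs of the workers you want to receive traffic. Every ARN, ID, name, and port here is a placeholder, not a value from this thread:

```python
# Sketch: one ALB host-header rule per swarm service, forwarding to that
# service's unique published port on the designated worker nodes.
import boto3

elbv2 = boto3.client("elbv2")

# Hypothetical inputs: workers designated for ingress, the ALB listener, the VPC.
WORKER_INSTANCE_IDS = ["i-0123456789abcdef0", "i-0fedcba9876543210"]
LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:111111111111:listener/app/my-alb/abc/def"
VPC_ID = "vpc-0123456789abcdef0"

# Each swarm service is published on its own port, e.g.
#   docker service create --name www     --publish 8080:80 <image>
#   docker service create --name staging --publish 8081:80 <image>
SERVICES = [
    {"name": "www",     "host": "www.service.com",     "port": 8080, "priority": 10},
    {"name": "staging", "host": "staging.service.com", "port": 8081, "priority": 20},
]

for svc in SERVICES:
    # One target group per service, aimed at the service's published port.
    tg = elbv2.create_target_group(
        Name=f"swarm-{svc['name']}",
        Protocol="HTTP",
        Port=svc["port"],
        VpcId=VPC_ID,
        TargetType="instance",
        HealthCheckProtocol="HTTP",
        HealthCheckPort=str(svc["port"]),
    )
    tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

    # Register only the worker nodes designated to receive incoming traffic.
    elbv2.register_targets(
        TargetGroupArn=tg_arn,
        Targets=[{"Id": i, "Port": svc["port"]} for i in WORKER_INSTANCE_IDS],
    )

    # Host-header rule on the ALB listener: this hostname -> this target group.
    elbv2.create_rule(
        ListenerArn=LISTENER_ARN,
        Priority=svc["priority"],
        Conditions=[{"Field": "host-header", "Values": [svc["host"]]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```

Because the swarm routing mesh listens on each published port on every node, registering only the designated workers (and keeping managers out of the ingress security group) is what confines public traffic to those workers.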
-
I have been trying to find documentation on how to configure one of the cloud load balancers (Google or AWS) to work with Docker Swarm.
On Kubernetes, when we create a "LoadBalancer" service, an ELB is created automatically for the service, but nothing like this is available for Swarm.
I should also mention that I'm trying to eliminate nginx entirely. So let's say I have www.service.com and staging.service.com in the same Docker Swarm cluster. The service is scaled to two replicas and there are two worker nodes.
In this situation, how does one create a load balancer that works with Swarm? Does the load balancer direct all requests to the manager and let the ingress routing mesh do the routing? Not sure if that will even work.
Or does the load balancer need to know the nodes in the cluster and route to the published ports for each service directly? In that case, if I scale up the Swarm cluster, I will need to manually update the load balancer rules.
A bit confused about the best practice here.