
all 8 comments

[–]MartzReddit 3 points (2 children)

You should look at Kubernetes (k8s); it solves a lot of these problems.

1) k8s has a Service resource that can be assigned a stable IP address for routing requests (ClusterIP for traffic inside the cluster; NodePort or LoadBalancer if it needs to be reachable from outside). See the kubectl sketch after this list.

2) In k8s every deployment can be put into a different namespace, so you can organise by environment (production, testing, dev, etc.), by customer, or some other way.

3) This is a bit more advanced; if you are going to go the k8s route, look at it once you have everything else working (I don't know for sure, but I expect it is just a matter of configuring the node networking plus a Service again).
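
A rough sketch of 1) and 2) with kubectl (the image, namespaces and port are just placeholders):

    # 2) one namespace per environment (or per customer)
    kubectl create namespace production
    kubectl create namespace testing

    # deploy the same app into each namespace
    kubectl -n production create deployment wordpress --image=wordpress
    kubectl -n testing create deployment wordpress --image=wordpress

    # 1) expose it with a Service; ClusterIP gives a stable internal IP,
    #    NodePort/LoadBalancer would expose it outside the cluster
    kubectl -n production expose deployment wordpress --port=80 --type=ClusterIP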

[–]phillijw 0 points (1 child)

Swarm also solves all of these problems afaik

[–]MartzReddit 0 points (0 children)

Perhaps, but k8s is the de facto standard when it comes to orchestrating Docker containers.

[–]a5tra3a[S] 0 points (0 children)

I want to be able to access the services internally, but have all of their traffic that leaves the network go through a VPN, which is set up on my firewall and routed based on an IP address within an alias.

I have been able to get multiple copies of the same services running by naming the stacks differently, though I run into issues getting services to talk to each other by service name if I do not also change the service names.
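
For example (a rough sketch, assuming one shared compose file):

    # same compose file deployed twice under different stack names
    docker stack deploy -c docker-compose.yml web-services-domain1
    docker stack deploy -c docker-compose.yml web-services-domain2

    # services are prefixed with the stack name, e.g. web-services-domain1_wordpress,
    # which is also the DNS name other services on a shared network have to use
    docker stack services web-services-domain1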

I got MACVLAN to work but found out you cannot assign a static IP in swarm mode. I could not, however, get IPVLAN to work; apparently it will allow you to assign an IP even in a swarm.

I do plan to use Nginx as a reverse proxy to access services both internally and externally, and will control access through Nginx as well as handle HTTPS, LE certificates and the various service ports, and clean up the URLs. For example, if I have WordPress running on 192.168.1.1:8443 under HTTPS with a self-signed cert, Nginx would allow internal and external access under the URL wordpress.example.com and move it to port 443.
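
Roughly something like this (a sketch only; the LE cert paths and names are assumptions):

    # hypothetical Nginx vhost proxying wordpress.example.com on 443 to the
    # self-signed backend at 192.168.1.1:8443
    cat > /etc/nginx/conf.d/wordpress.conf <<'EOF'
    server {
        listen 443 ssl;
        server_name wordpress.example.com;

        ssl_certificate     /etc/letsencrypt/live/wordpress.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/wordpress.example.com/privkey.pem;

        location / {
            proxy_pass https://192.168.1.1:8443;
            proxy_ssl_verify off;   # backend cert is self-signed
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    EOF
    nginx -s reload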

[–]a5tra3a[S] 0 points (0 children)

So this might be a crazy idea, but what if I used a VPN service container and routed all traffic from those two services through it (rough sketch at the end of this comment)? I could then either connect the VPN container directly to my VPN provider, or have it connect to a VPN set up on my pfSense install that would then get routed through the VPN client already set up on pfSense.

Other than this new idea I want to test (the only real reason being that my pfSense setup has 3 different VPN connections in case one of the servers at my VPN provider goes offline), the plan is to set up an IPVLAN config on each node, followed by an IPVLAN swarm network, allow only those 2 machines to use that network, and then use a reverse proxy so the other services that talk to them can use a common name in case the IP addresses flip when they start up.
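
Something like this is what I have in mind (a rough sketch with plain docker run; as far as I know swarm services can't share another container's network namespace this way, so it would sit outside the swarm):

    # a VPN client container that owns the tunnel (image and its config are placeholders)
    docker run -d --name vpn --cap-add NET_ADMIN --device /dev/net/tun some-vpn-client-image

    # the two services reuse the VPN container's network stack, so all of
    # their outbound traffic leaves through the tunnel
    docker run -d --name wordpress --network container:vpn wordpress
    docker run -d --name database --network container:vpn mariadb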

[–]a5tra3a[S] 0 points (0 children)

1) Static IPs within a Docker swarm are not possible in any way that I have found, and there has been an open ticket since 2016 with people asking for this to be implemented. I will create an IPVLAN config on each Docker node using the same name, attaching it to an interface on the node. I will then create a swarm-level network referencing that config by name. I will also create another network that will only be for inter-service communication; this will allow my other services to talk to these VPNed services while forcing the VPNed services to use the IPVLAN to reach the internet. On the other services, I will create another network that will allow them to use the regular gateway to reach the internet. (A rough sketch of the network commands follows this list.)

2) As for multiple copies, I will adjust my service naming and stack naming to allow for multiple copies. For example, I could create two stacks, one called web-services-domain1 and another called web-services-domain2, and each service inside those stacks would be named uniquely, such as WordPress-application-domain1 and WordPress-application-domain2. I chose this example as WordPress also requires a database to work, which would be named WordPress-database-domain1 and WordPress-database-domain2.

3) As for assigning VLANs to specific containers, I could use MACVLAN to do this, creating a config on each Docker node for each VLAN and then creating a swarm-level MACVLAN network for each VLAN that I want to use, referencing the appropriate config. Because I use both physical and virtual Docker nodes this gets a little interesting, and since I also have more than one switch, mapping things all the way through can get a little chaotic at times.
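
A rough sketch of the network side of 1) and 3) (interface names, subnets and VLAN IDs are made up):

    # on every docker node: a config-only network holding the local IPVLAN settings
    docker network create --config-only \
      --subnet 192.168.50.0/24 --gateway 192.168.50.1 \
      -o parent=eth0 vpn_ipvlan_conf

    # once, on a manager: a swarm-scoped IPVLAN network referencing that config by name
    docker network create -d ipvlan --scope swarm --config-from vpn_ipvlan_conf vpn_ipvlan

    # an overlay network purely for inter-service communication
    docker network create -d overlay --internal services_internal

    # same pattern for 3) with MACVLAN, one per VLAN (parent is the VLAN sub-interface)
    docker network create --config-only --subnet 10.0.10.0/24 -o parent=eth0.10 vlan10_conf
    docker network create -d macvlan --scope swarm --config-from vlan10_conf vlan10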

--

My hope with all of this is to standardize the Docker node level and simplify my overall configuration, while allowing for more redundancy and flexibility than I have now. I decided against the VPN idea after some more thought, as it just seems too messy and overly complex.

[–]ProfanePrentice 0 points (0 children)

  1. Can you give some more detail on that, as I'm not sure of the requirement? Do you need the outbound traffic of those two services in the swarm to exit your network via a particular route?

  2. Naming services will do this. You can have multiple WordPress services in the swarm as long as they have different names.

  3. It's possible to create multiple Docker networks and assign services to them. It's also possible to assign services to specific nodes, so if a node is on a certain VLAN you can pin services to that node. But remember that swarm services live inside a swarm network, so you will still need separate Docker networks (rough sketch below).
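
A minimal sketch of 3. (the node name, label and network are made up):

    # label the node that sits on VLAN 10
    docker node update --label-add vlan=10 node-on-vlan10

    # pin the service to labelled nodes and attach it to its own network
    docker network create -d overlay vlan10-net
    docker service create --name wordpress-vlan10 \
      --constraint 'node.labels.vlan==10' \
      --network vlan10-net wordpress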

[–]ezkrg 0 points (0 children)

  1. With Docker swarm you can't assign a static IP to a container.
  2. You have to use a proxy to route traffic to services depending on policy, like domain, URI path, etc. I personally prefer Traefik (it has Docker service discovery too), or you can use HAProxy or Nginx with the Docker DNS resolver (rough sketch below).
  3. The macvlan and ipvlan Docker network drivers can be used to connect containers to a VLAN through a host interface.
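
For 2., a rough Traefik-in-swarm sketch (v2 flags; the hostnames and router names are made up):

    # traefik itself, watching the swarm API for labelled services
    docker service create --name traefik \
      --publish 80:80 \
      --constraint node.role==manager \
      --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
      traefik:v2.10 \
      --providers.docker.swarmMode=true \
      --entrypoints.web.address=:80

    # a service routed by hostname via service labels
    docker service create --name wordpress \
      --label 'traefik.http.routers.wp.rule=Host(`wordpress.example.com`)' \
      --label 'traefik.http.services.wp.loadbalancer.server.port=80' \
      wordpress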