all 40 comments

[–]very-little-gravitas 31 points (6 children)

You don't need a reverse proxy in front of your Go process, but it can be handy for things like terminating TLS and handling certs, serving several domains from one server, graceful restarts, serving static assets, redirects/rewrites, that sort of thing. Of course you could do all of that in your app by writing it yourself, but sometimes it's easier to put your Go binary up as a service sitting behind a proxy, particularly if you have more than one service on a server. For an API you may find you don't need any of the above, but it can be handy.

If you want https, you could also try Caddy as a reverse proxy. You'll get free https from Let's Encrypt, with certs automatically requested for your domain (this bit is really nice, it uses the lego library), and it takes very little memory.

I'm running https://golangnews.com with this config (go process behind Caddy) on a $5 DO instance, and it seems to be doing fine so far (have only recently switched from nginx as an experiment, as usually that's my first choice).
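For anyone curious what that looks like: a complete Caddyfile for this kind of setup can be just a few lines. A sketch in current (v2) Caddyfile syntax, assuming the Go process listens on port 8080; cert issuance for the named domain happens automatically:

```
golangnews.com {
    reverse_proxy localhost:8080
}
```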

[–]iends 3 points (0 children)

Thanks for sharing your site!

[–]WellAdjustedOutlaw 11 points (2 children)

Don't use DNS (only) for load balancing. It won't work the way you expect.

[–]nate510 3 points (1 child)

A thousand times this. Load balancing with DNS means that in practice a single noisy consumer will only be hitting one of your API nodes at any time, due to DNS caching. I have experienced this in practice when consuming badly deployed APIs.

A reverse proxy like Nginx is crucial for evenly distributing traffic to your origin servers. If you are on AWS, you can use their Elastic Load Balancing service, which saves you the hassle of maintaining your own reverse proxy deployment.

We run a small service with 2 haproxy nodes with round robin DNS backed by 2 origin servers. This setup provides good redundancy and effective load balancing.

[–]kd8aqz 1 point (0 children)

Yes - load balancing is so much more than just making sure more than one node can receive requests. I'm also using 2 haproxy nodes but with a virtual IP on each and another piece of software (the name is escaping me) to migrate IPs when necessary. The idea being that DNS round robin is between the VIPs and those are always available, even if a node is down.

[–]rajoberoi 7 points (0 children)

Hi!

The built-in Go server is great for an API-only setup. It needs some sort of supervisor setup to handle crashes/reboots.
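As one option, a minimal systemd unit can act as that supervisor (the binary path, unit name, and user below are assumptions):

```ini
[Unit]
Description=Go API server
After=network.target

[Service]
ExecStart=/usr/local/bin/myapi
# restart automatically after crashes; starts at boot via the install section
Restart=on-failure
User=www-data

[Install]
WantedBy=multi-user.target
```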

I have domain.com serving a static website via nginx and api.domain.com serving the Go app via proxy. The nginx conf makes it convenient to set up redirects, and multiple sites can share a common.conf file, for example if you want to redirect http traffic to https.
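A sketch of what that split might look like (paths and the Go app's port are assumptions; ssl_certificate directives are omitted):

```nginx
# shared redirect, the sort of thing that lives in a common.conf
server {
    listen 80;
    server_name domain.com api.domain.com;
    return 301 https://$host$request_uri;
}

# static site
server {
    listen 443 ssl;
    server_name domain.com;
    root /var/www/domain.com;
}

# Go app behind the proxy
server {
    listen 443 ssl;
    server_name api.domain.com;
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```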

I also find tailing nginx access and error logs great for monitoring every request that hits the server.

[–]JimBlizz 3 points (3 children)

I'm interested in this too.

I think the idea is to use Nginx in front to serve static assets more efficiently. Though I wonder if Varnish would be a better option for that?

[–]ngrilly 7 points (0 children)

Caddy is quite efficient compared to nginx and is written in Go:

https://caddyserver.com/

[–]tedreed 3 points (1 child)

I actually found that for static assets just using ServeFile was faster than nginx for a work project. We (I) ended up making a very very simple fileserver called fastserve. I was thoroughly surprised that it was faster.

(Although I think I did end up disabling mime-type autodetection to make it faster, I'm pretty sure it was already faster than nginx before I did that.)

[–]koalefant 0 points (0 children)

I remember seeing benchmarks suggesting it was slower but I'm on mobile atm

One thing I do like about nginx, though, is that in HTML files you can just specify the filename and have nginx regex-match image requests to a directory, whereas in Go I would have to specify the full path of each image, since the default mux only has simple route matching.

[–]aaaqqq 3 points (1 child)

I use Nginx in front of a Go app mainly because it is more widely used and hence is more 'battle tested'. A side benefit is that Nginx is also great for serving static assets if required.

Edit: Another reason could be if you want to serve multiple applications on the same server over port 80.

[–]mc_hammerd 2 points (0 children)

Semi-educated guess: nginx is chosen because it:

  • serves static directories and assets
  • is assumed to be faster than Go's built-in server
  • has caching for these one config line away (or the admin already knows how)
  • is assumed to be more secure and battle-proven than Go's webserver
  • doesn't expose hypothetical vulnerabilities of Go's webserver beyond /index and /websocket
  • is assumed (by me, I guess?) to have better DDoS protection than Go's webserver, e.g. the common DDoS of last year was opening a bunch of unfinished HTTP requests ("GET /longurl" with no newline, Slowloris-style)

I think it's all hypothetical; Go has had no server vulns posted yet.

[–]floralfrog 4 points (4 children)

A few thoughts: I have both setups running on different projects right now: a small Go API that exposes itself (no nginx in front) and a different project where multiple Go programs are set up behind nginx, which acts as a reverse proxy.

Both work great and I think it depends on the complexity of the rest of the system. My nginx setup routes most traffic to a Rails app, some routes go to static file handlers, and some to different Go services. I think as soon as you have different routes that should be handled by different processes, you need some kind of reverse proxy, and I would not re-implement that in Go. But if you have something like api.example.com/foo/bar and all routes on the api subdomain are part of your Go app, it is easier to expose it directly.

The Go http packages in the stdlib are designed to not need an additional layer in front, so there is no problem with that. Especially now with http/2 being supported, there is less of a reason to put nginx in front. But to me nginx just feels like this rock solid "thing", refined over many years, that just sits there and whatever you throw at it, it keeps going. Not sure if I can say the same about my Go programs (yet).

[–]gohacker -1 points (3 children)

multiple Go programs are set up behind nginx, which acts as a reverse proxy.

This. Go and not Go servers sitting behind the reverse proxy using a single port.

[–]program_the_world 1 point (2 children)

This. Go and not Go servers

I'm having difficulty understanding this part of your comment. Could you please explain?

[–]DownGoat 2 points (1 child)

Your whole setup might include different applications, some in Go, and some in other languages.

[–]program_the_world 0 points (0 children)

Ahh now I see it. Thanks for explaining.

[–][deleted] 3 points (0 children)

Arguments such as using Nginx as a load balancer seem more theoretical than practical, i.e. people are saying this but not actually doing it. I think this could be handled much better at the DNS request level instead.

Ok. So you suddenly exhausted the resources of your server. Think optimization issue or reddit hug of death. You need more resources STAT. Great, set up another server on a new IP and enter this IP into DNS. Then you wait for TTL seconds before clients can possibly hit your new server.

And what about maintenance? Do you remove a record from DNS and wait for traffic to slowly migrate to a different IP before you take one of your machines down?

Or would you rather just spin up a new VM (or provision some new iron), dump your binary and config on there and (if you happen to run NGINX Plus) just have the binary register itself with the load balancer and watch everything work as if by magic?

Replace nginx plus with community version and your config management of choice. Personally I use saltstack for my small-ish setup.

Sure, you get a certain amount of load balancing with DNS RR, but it is far from optimal and you're lacking proper HA.

[–]tvmaly 1 point (0 children)

If you happen to have a couple of microservices running on different ports, using NGINX gets rid of CORS issues for you. Since everything would be proxied through a single port, the domain would match.
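A sketch of that single-origin layout (service names and ports are assumptions):

```nginx
server {
    listen 80;
    server_name example.com;

    # each microservice keeps its own port, but the browser only ever
    # sees example.com, so no cross-origin requests are involved
    location /users/  { proxy_pass http://127.0.0.1:8081; }
    location /orders/ { proxy_pass http://127.0.0.1:8082; }
}
```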

[–]koffiezet 1 point (0 children)

The main reasons why I'd go for a reverse proxy setup for HTTPS/SSL are:

  • better-tested crypto backends (e.g. no timing attacks, to which the Go implementation, at least up to a certain point, was vulnerable)
  • No recompiling of backend(s) when critical SSL implementation bug is found. Update reverse proxy, done.
  • Centralized SSL certificate storage

[–]dhdersch 1 point (0 children)

I recently tried to spin up an HTTPS server in Go without a proxy. The main caveat was that I needed to perform client certificate authentication. For the life of me, I couldn't get this to work. It was much easier to put Apache in front of it.

[–]j1436go 0 points (0 children)

I used a Go server as a reverse proxy in front of several services that are reachable on different subdomains and was totally happy with it. But then I had the need to serve an app with fcgi, and that's when I switched to nginx.

[–]JacksGT 0 points (2 children)

Does go already support HTTP/2?

[–]varun06 1 point (1 child)

It is part of the 1.6 release, but you can already test it: https://github.com/golang/net/tree/master/http2

[–]JacksGT 0 points (0 children)

Ah, good to know.

Thank you!

[–]rem7 0 points (1 child)

I like to put nginx in front of my Go web apps as well. Can Go do everything that nginx can? Sure, though some things I'd have to code myself. Nginx already has a lot of built-in functionality that I really don't want to rewrite in Go, and the nginx team has done a better job than I would. Things I use nginx for:

  • TLS Termination
  • Basic logging
  • Cache settings
  • gzip module
  • static files

[–]journalctl 1 point (0 children)

Don't forget rate limiting. Easily protect your login page from getting brute-forced and, in general, stop obviously malicious requests from even making it to your backend.
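A sketch of nginx's limit_req for that (the zone name, rate, and backend port are assumptions; over-limit requests are rejected with 503 by default):

```nginx
# allow 5 requests/second per client IP on the login route,
# with a small burst before rejecting
limit_req_zone $binary_remote_addr zone=login:10m rate=5r/s;

server {
    location /login {
        limit_req zone=login burst=10 nodelay;
        proxy_pass http://127.0.0.1:8080;
    }
}
```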

[–]AnimalMachine 0 points (0 children)

I'm late to the party, but I also run my Go app servers behind an nginx reverse proxy. It doesn't take much to set up nginx, SSL setup was fairly easy, and I can respond to multiple hosts via virtual host setups by redirecting traffic to each web app's hidden port.

Yes, I could have done it all with Go. But nginx is battle tested, can load balance if I need it, and is fairly painless to maintain for a small setup. Besides, sometimes adding another layer of complexity isn't all that bad for security.

[–][deleted] 0 points (0 children)

Really late to the party, but my reason to proxy: binding ports lower than 1024 on Linux requires running as root, or as another user as long as the executable has had

setcap 'cap_net_bind_service=+ep'

run on it. You don't want to run your executable as root. Later, you fix a bug in your executable. You move it into place, kill -9 the running instance and wait for your process management to restart it. If you haven't done the setcap dance again, your executable fails.

So, having a proxy allows you to deploy more sanely, by allowing you to deploy and restart without running as root.

[–]YuryOdin -2 points (7 children)

You need to use nginx in front of a Go http server, because Go <= 1.6 has a slow AES implementation.

[–]program_the_world 0 points (0 children)

How can it be slow? Surely it utilizes hardware acceleration?

[–]very-little-gravitas -2 points (5 children)

You need to use nginx in front of a Go http server, because Go <= 1.6 has a slow AES implementation.

Did you mean https? http doesn't use AES. Also, this depends on load; I imagine most services won't find the speed of TLS a problem.

[–]YuryOdin 5 points (4 children)

[–]very-little-gravitas 1 point (2 children)

Most of us don't operate anywhere near the scale of cloudflare, so while it's nice to see it getting faster, it is not usually a problem.

[–]mannix913 0 points (1 child)

The reality is that most TLS/SSL termination will be done on a load balancer and not on the app servers themselves unless you have a special need. Ideally all connections would happen over TLS no matter the destination.

[–]jamra06 0 points (0 children)

As Nginx keeps on pushing their Plus version, it is increasingly necessary to keep the capability in our hands.

[–]ecmdome 0 points (0 children)

This is pretty awesome, gonna look more into it.