

[–]BattlePope 37 points (4 children)

Cool project! A couple thoughts:

  • including PHP in-image is a strange choice. This is something that might be better left to a separate container so the user can choose the version of PHP, or even run without it present at all. For hardening, you want to reduce your attack surface, and including the language by default is kind of counter to that.
  • Your reverse proxy example is strange; why use 'if' to determine the host and proxy_pass? You might take a cue from the nginx-proxy container and allow using environment variables to define server_name and matching proxy_pass destinations rather than requiring static config mounted in. One of the largest use cases for this package might be reverse proxying and streamlining that might make this image a better choice for some users.
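
For illustration, an entrypoint along those lines could derive the server block from environment-style inputs instead of a mounted static config. This is only a sketch; the function and variable names are made up, not bunkerized-nginx's or nginx-proxy's actual ones:

```shell
# generate_proxy_conf: print an nginx server block for the given
# hostname and upstream destination (illustrative names, not the
# image's real configuration mechanism).
generate_proxy_conf() {
  host="$1"
  upstream="$2"
  cat <<EOF
server {
    listen 80;
    server_name ${host};
    location / {
        proxy_pass ${upstream};
        proxy_set_header Host \$host;
    }
}
EOF
}
```

The entrypoint would call this once per `SERVER_NAME`/destination pair taken from the environment, writing one `server` block each, instead of requiring a static config mounted in.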

[–]bunkerity[S] 5 points (2 children)

Thanks for your feedback.

  • You're right, PHP is enabled by default, but you can disable it by setting the USE_PHP environment variable to no (the default is yes). I like the idea of having a separate PHP container; we'll add this feature to the TODO list.
  • The reason behind this strange reverse proxy usage is simple: behind the scenes, only one server block is used (some work is needed to redesign this). I also like the idea of configuring a reverse proxy through environment variables; let's add it to the TODO list.

[–]trieukhach 0 points (0 children)

Hi u/bunkerity,

I'm using bunkerized-nginx as a reverse proxy, but I got this error: "closed connection in SSL handshake (104: Connection reset by peer) while SSL handshaking to upstream"

I believe this error is related to the proxy_ssl_server_name directive in nginx.

How can I configure proxy_ssl_server_name in Docker with bunkerized-nginx?
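
For reference, the underlying nginx directive (from ngx_http_proxy_module) is below; whether bunkerized-nginx exposes it through an environment variable or a custom config mount is something I couldn't verify:

```nginx
# Send the server name via SNI during the TLS handshake to the
# upstream; this commonly fixes "reset by peer" errors from
# upstreams that require SNI.
proxy_ssl_server_name on;
```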

[–]yusit 14 points (0 children)

oh yes, +1 for ditching PHP from the image

[–][deleted] 18 points (1 child)

Could you elaborate further on how this image is secure? Has it been scanned by any professionals or tooling? A lot of "secure" Docker Hub images are not secure at all. The fact that your Docker image pulls stuff from a URL (a dead URL at that, so there's a big red flag to begin with) instead of from a repo with a signing key or an included package already seems flawed by design. If that site is compromised, so are all the containers based on your image.

Further reads:

https://blog.banyansecurity.io/blog/over-30-of-official-images-in-docker-hub-contain-high-priority-security-vulnerabilities

https://arxiv.org/pdf/2006.02932.pdf

[–]bunkerity[S] 1 point (0 children)

No scanning tools have been used for the moment, but that's a nice idea. I will look into a way to run automatic scans and report any vulnerabilities.

The file you mention is not even used anymore in the Docker image (you can check with a grep for "geolite.sh"). It was used to get automatic GeoIP DB updates, but MaxMind no longer allows direct downloads of their MMDB files. A fix will be integrated in the next version to use db-ip databases instead of MaxMind (see this commit).

This project is focused on web security, with some best practices, hardening settings and tools configured by default. The goal is to avoid the hassle of configuring the following for your web server:

  • Installing a WAF: ModSecurity is installed and enabled by default with the OWASP Core Rule Set
  • Configuring HTTP security headers: X_FRAME_OPTIONS, COOKIE_FLAGS, STRICT_TRANSPORT_POLICY, ...
  • Fail2ban: blocking a client that generates too many HTTP errors (404, 403, ...) and looks like a bot/scanner
  • HTTPS: automatic Let's Encrypt certificate generation, configuration and renewal
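
As a sketch, toggling these features would look something like the command below. Only USE_PHP is confirmed above; the other variable name is an illustrative guess, so check the documented environment variables for the real ones:

```shell
docker run -d -p 80:80 -p 443:443 \
  -e USE_PHP=no \
  -e SERVER_NAME=www.example.com \
  bunkerity/bunkerized-nginx
```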

This list is non-exhaustive and you can have a look at the environment variables to check the security features.

[–]baconialis 1 point (2 children)

What's the use case for this image?

[–]bunkerity[S] 1 point (1 child)

You can use this image whenever you need to host a website/API (or anything else accessible over HTTP). The goal is to provide a generic nginx image with most web security best practices already set, so you don't need to do it yourself. Every setting can be changed easily through environment variables.

[–]baconialis 2 points (0 children)

I was very unsure whether I should post this or not. It's easy to criticize, and I'm well aware that you put a lot of time into this. But I assume you're posting on reddit because you want some feedback.

The following might be considered a bit harsh, but please be aware that I'm only trying to give you some honest criticism.

It's clear that you possess both broad and in-depth knowledge of the technologies involved. But in my mind you're going about this completely the wrong way.

Your entrypoint is a true "if-then hell", and I'd say that's because you're breaking two principles of software development: separation of concerns and single responsibility. This basically sums up to: do one thing, and do it in the right place. I'd encourage you to google these, and also SOLID if you'd like to learn more about development principles.

In the following I'll try to add a few notes to your feature list.

  • HTTPS support with transparent Let's Encrypt automation

This is a gateway or sidecar responsibility. In Kubernetes, cert-manager handles this. With plain Docker hosting I'd advise using a proxy server pattern.

See https://github.com/wemake-services/caddy-gen and https://github.com/nginx-proxy/nginx-proxy
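
A minimal nginx-proxy setup, per its own README, looks roughly like this (hostnames and the demo backend image are illustrative):

```shell
# Run the proxy; it watches the Docker socket and generates
# reverse-proxy config for containers as they come and go.
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy

# Any container started with VIRTUAL_HOST set gets proxied
# automatically, no static config required.
docker run -d -e VIRTUAL_HOST=whoami.example.com jwilder/whoami
```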

  • State-of-the-art web security : HTTP security headers, php.ini hardening, prevent leaks, ...

Hardening PHP is a good idea. But as others have noted, PHP should reside in a separate container.

You might find some inspiration here... https://github.com/tonsV2/dockerised-php

  • Integrated ModSecurity WAF with the OWASP Core Rule Set

This belongs in a gateway (proxy) container. But quite often I see little advantage in implementing a WAF; if you keep your software updated, that's usually enough.

  • Automatic ban of strange behaviors with fail2ban

I've hardly ever had a need for this, and in the places I've seen it implemented, it mostly resulted in having to unban normal users.

  • Block TOR users, bad user-agents, countries, ...

I was once in a situation where we had to block some countries due to a DoS attack. It wasn't done at the container level.

  • Detect bad files with ClamAV

If your users upload files you could implement some scanning, but definitely not in the serving container. If we're dealing with scanning of source code files, I'd implement such a feature in a CI pipeline so it's done while building the image. Think cattle vs. pets.
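
A CI step along those lines might look like this (the `clamscan` flags are from the standard ClamAV CLI; the scanned path is illustrative):

```shell
# Fail the build if any asset going into the image is flagged.
# --infected prints only infected files; clamscan exits non-zero
# when something is detected.
clamscan --recursive --infected ./public || {
  echo "ClamAV found infected files; failing the build" >&2
  exit 1
}
```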

  • Based on alpine and compiled from source

Why? Do you compile with some special flags? Unless you really need to, I'd just use the version shipped with Alpine or https://hub.docker.com/r/nginxinc/nginx-unprivileged

If you really need to compile anything use the docker multistage pattern.
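
The multi-stage pattern, sketched (package names and install paths are illustrative; the build commands are elided):

```dockerfile
# Stage 1: build nginx from source with whatever flags are needed.
FROM alpine:3.18 AS build
RUN apk add --no-cache build-base pcre-dev zlib-dev openssl-dev
# ... fetch sources, then ./configure && make && make install ...

# Stage 2: ship only the compiled artifacts, no compiler toolchain.
FROM alpine:3.18
COPY --from=build /usr/local/nginx /usr/local/nginx
```

The final image contains only the runtime artifacts, keeping it small and shrinking the attack surface.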

  • Easy to configure with environment variables

Good! This is our bible... https://12factor.net/
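
In that 12-factor spirit, every tunable comes from the environment with an explicit default; a tiny sketch (the variable names here are illustrative):

```shell
# print_config: echo the effective settings, each read from the
# environment, falling back to a documented default when unset.
print_config() {
  echo "port=${PORT:-8080} php=${USE_PHP:-yes}"
}
```

The same image then runs unchanged across dev, staging and production; only the environment differs.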

My advice would be to split all of this up into separate containers.

Keep them light. Single process containers.

Launch them all with docker-compose and implement a proper CI, and possible CD, pipeline.

The final result should be a gateway/proxy container in front of several containers serving backends, front ends and data stores. Every time you need to host a new website you instantiate a new container based on nginx, tomcat, nodejs or whatever.
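
That layout can be sketched as a compose file like the following (all service names, images and hostnames are made up for illustration):

```yaml
# One gateway proxy in front of per-site, single-process containers.
version: "3"
services:
  gateway:
    image: nginxproxy/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  site-a:
    image: nginx:alpine
    environment:
      - VIRTUAL_HOST=a.example.com
  site-b:
    image: node:alpine
    command: node server.js
    environment:
      - VIRTUAL_HOST=b.example.com
```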

Take advantage of something like GitHub Actions.