I see so many posts and people who run NGINX as their reverse proxy. Why though? There’s HAProxy and Apache, with Caddy being a simpler option.
If you’re starting from scratch, why did you pick/are you picking NGINX over the others?
Nginx handles more connections than Apache, given the same resources. HAProxy does not have web server functionality like the former two, so Nginx is the natural upgrade from Apache. Caddy is relatively new, I’m not sure how it compares other than being easier to set up.
Nginx was the easiest to set up for me at the time and I’ve no reason to fix what isn’t broken.
Nginx “just works™”, has never gotten in the way, has been rock solid, and has not changed significantly over the years.
Why would I need something else?
HAproxy cannot serve static files directly. You need a webserver behind it for that.
Apache is slow.
Nginx is both a capable, fast reverse-proxy, and a capable, fast webserver. It can do everything HAproxy does, and what Apache does, and more.
I am not saying it is absolutely best for every use-case, but this flexibility is a large part of why I use it in my infra (and have been using it for a decade).
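A minimal sketch of what that looks like in practice (the paths and upstream address are placeholders): one server block serving static files from disk and reverse-proxying everything else to an app.

```nginx
server {
    listen 80;
    server_name example.com;

    # static assets served straight from disk, no backend involved
    location /static/ {
        root /var/www/example;
    }

    # everything else is reverse-proxied to the app
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```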
Counter question: Why does everyone call it “engine X” and not “enjinx”, which would be the way cooler pronunciation?
oh I’ve only ever read it, never heard it pronounced, do people really pronounce it engine X?
That is the way it’s pronounced, yes.
Huh. That is way cooler
Directly from the nginx home page:
nginx [engine x] is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server, originally written by Igor Sysoev.
I always said “in GIN icks” (gin like the alcohol) based on someone else’s pronunciation years ago. I never realized it was meant to have anything to do with “engine” as a result.
I call it N Jinx. Always have and I’ll never be convinced otherwise that it’s not.
It’s actually n-jynx duh
I always did? A friend pointed out to me once the “correct” pronunciation. I like this way more.
Security
Caddy is good but it tried to do too much. This means that security bugs could be way more common. It has been audited by outside people and the issues they found were fixed, but I am still very doubtful that it is secure yet.
I think security is a fair point, given caddy’s younger age compared to nginx, but I wouldn’t say it tried to do too much.
I use Traefik because it solved a problem with the static configuration approach which Nginx had / still has.
In a scenario with multiple services behind Nginx, taking one down or replacing an instance is a massive headache. I tried to script around it, but basically the Nginx container would choke on the fact that a service does not [yet / anymore] exist, and together with the docker networking stack it turned out to be an insurmountable problem.
Traefik otoh discovers services based on (in my case) labels on the docker containers running locally. And then updates the configuration on the fly.
Basically the static approach to configuration resulted in a massive headache when I needed to enable zero-downtime deployments and updates behind Nginx. And Traefik handled it perfectly without dropping a single request.
Nowadays I manage my dynamic configuration with ansible and update the values for the file-based configuration provider with a playbook. I don’t need a UI to manage my inventory, I use ansible for that. Traefik handles the rest perfectly.
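For anyone who hasn’t seen the label approach, it looks roughly like this (assuming Traefik v2-style labels; the service name, domain, and port are made up). Traefik watches the Docker API and builds the routing config from these on the fly:

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      - "traefik.http.services.whoami.loadbalancer.server.port=80"
```

Start or stop the container and the route appears or disappears, no proxy restart needed.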
Completely agree, which is why I do the same.
Additional bonus: proxies that interact with the docker API directly (I think Caddy can also do it) save you from exposing the services on any port at all (only in the docker network). So it’s way less likely to expose a port with a service by mistake, and there’s no need for arbitrary and unique localhost ports.
When NGINX showed up it beat the then-dominant Apache on resource utilization hands-down.
It’s also very configurable and has a lot of modules, both in-house and third party.
The only downside for me: as of late the whole commercial part of the project has been gobbling up everything to push the non-free version, to the point where it’s hard to find info on the free version, e.g. the wiki page that lists all the third-party modules. The nginxtutorials site seems to be a good resource.
Btw one of the main devs forked it into freenginx:
Dounin writes in his announcement that “new non-technical management” at F5 “recently decided that they know better how to run open source projects. In particular, they decided to interfere with security policy nginx uses for years, ignoring both the policy and developers’ position.” While it was “quite understandable,” given their ownership, Dounin wrote that it means he was “no longer able to control which changes are made in nginx,” hence his departure and fork.
Also, fun fact: this is probably the only instance of Russian software where muricans don’t cry Commie! all the time (maybe because the parent company was acquired).
As a security professional, what finally got me to move from Apache to NGINX was OpenResty.
I sometimes still put Apache behind it, depending on my goals.
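For anyone unfamiliar: OpenResty bundles nginx with LuaJIT, so you can script request handling right in the config. A toy sketch of the kind of thing it enables (the header name and upstream are made up):

```nginx
location /api/ {
    # run Lua in the access phase, before the request is proxied
    access_by_lua_block {
        -- hypothetical check: reject requests that don't carry a token header
        if not ngx.var.http_x_api_token then
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    proxy_pass http://127.0.0.1:8080;
}
```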
NGINX is a bit more hands on than some other options but it’s mature, configurable and there’s a huge amount of information out there for setting it up for various use cases.
In my case, it’s what I set up when I was first getting into this and it works, so I don’t want to go through setting up anything else.
The only reason I tend to use it is the included webserver. It’s not bad, but the paywalling of functionality needed for it to be a proper LB left a bad taste in my mouth. That, and HAProxy blows it out of the water in every test I’ve done over the years where availability is at all a concern. HAProxy is also much more useful when routing TCP.
There’s a lot of good resources for Nginx, it’s fast (faster than Caddy), and has a ton of features, so you can use it for pretty much anything HAProxy, Apache, or Caddy can do, and not sacrifice much in performance.
That said, I mostly use HAProxy and Caddy. Here’s my basic setup:
- HAProxy at the edge VPS - routes requests to machines based on SNI
- WireGuard VPN - connects my internal devices to my VPS
- Caddy in Docker - runs on the internal network on my NAS/homelab - manages LetsEncrypt renewals and reverse proxies to the internal Docker network
- Nginx in Docker - FE for NextCloud; this simplifies things so all my TLS is handled in one place, and Caddy doesn’t need to touch files
I use a local DNS server on my router so my domains can route directly to Caddy instead of going over the internet when on my network, otherwise I may just have HAProxy handle LetsEncrypt certificates.
From what I can tell, Nginx is a little more efficient than Caddy, but Caddy is plenty fast for my needs. I’m considering switching from NextCloud to the new ownCloud Infinite Scale, and if I do, I’d ditch nginx completely.
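For reference, the SNI routing on the edge VPS is basically TLS passthrough in TCP mode; a rough sketch (hostnames and the WireGuard address are placeholders):

```haproxy
frontend tls_in
    mode tcp
    bind :443
    # wait for the TLS ClientHello so the SNI is available
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend homelab if { req_ssl_sni -m end .example.com }
    default_backend homelab

backend homelab
    mode tcp
    # Caddy on the homelab side of the WireGuard tunnel terminates TLS
    server nas 10.8.0.2:443 check
```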
Because Nginx Proxy Manager exists.
And also because, for me, it started with web hosting, where Apache and Nginx dominate, and later because of the many easy-to-understand example configs on the net, including many “docker letsencrypt” examples.
Very much because it exists. It’s way simpler to do in the GUI.
I didn’t have to learn anything specific, and it works for things not in docker containers too, like the Nextcloud Snap.
And it makes it very easy to get and maintain certificates.
I’ll probably look into newer, fancier options such as Caddy one day, but as far as I remember Nginx has never failed me: it’s stable, battle tested, and extremely mature. I can’t remember a single time when I’ve been affected by a breaking change (I could not even find one by searching changelogs) and the feature set makes it very versatile. Newer alternatives seem really interesting, but it seems to me they have quite frequent breaking changes and are not as feature rich.
That being said, I’d love to see side-by-side comparison of Nginx and Caddy configs (if anyone wants to translate to Caddy the Nginx caching proxy for OSM I shared earlier this week, that would make a good and useful example), as well as examples of features missing from Nginx. This may give me enough motivation to actually try Caddy :)
(edit: ad->and)
What about envoy proxy?
Nothing else on the market that I am aware of has as low a latency impact on workloads.
I have heard a lot about Envoy proxy from Istio but never looked into it for baremetal usage. I’ll keep an eye out, thanks