reverse proxy – Chaining multiple nginx error pages

I’m fighting a bit with nginx, trying to get a “chain” of nginx error pages to work. The current state looks like this:

server {
    listen       443 ssl;
    server_name  ~^((?<repo>.*)\.)example\.de$;

    ssl_certificate /etc/pki/tls/certs/cert.pem;
    ssl_certificate_key /etc/pki/tls/private/key.pem;
    client_max_body_size 100M;

    location / {
        proxy_pass http://backend_example/example/${repo}$request_uri;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_connect_timeout 300;
        proxy_intercept_errors on;

        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;

        rewrite  ^/$  /index.html  permanent;
        error_page 404 /backend_404.html;
    }

    location /backend_404.html {
        proxy_pass http://backend_example/example/${repo}/404.html;
        proxy_intercept_errors on;

        error_page 404 /error.html;
    }

    location /error.html {
        ssi on;
        internal;
        root /usr/share/nginx/html;
    }
}

What’s working:

  • if the file 404.html is available on the backend, it is delivered as intended
  • if 404.html is not available on the backend, I get the standard nginx 404 page instead of my local custom error.html
  • if I replace error_page 404 /backend_404.html; with error_page 404 /error.html;, the error.html location also works

What I want to achieve:

  • if 404.html exists on the backend, deliver it
  • if 404.html does not exist, deliver a custom error page error.html instead of the default one
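For what it’s worth, nginx only follows a second error_page hop when recursion is explicitly enabled; recursive_error_pages defaults to off, which matches the behaviour described above. A minimal sketch of the intended chain, assuming that directive is the missing piece:

```nginx
location / {
    proxy_pass http://backend_example/example/${repo}$request_uri;
    proxy_intercept_errors on;
    # allow the 404 raised while serving /backend_404.html to be
    # intercepted again instead of falling back to the default page
    recursive_error_pages on;
    error_page 404 /backend_404.html;
}

location /backend_404.html {
    proxy_pass http://backend_example/example/${repo}/404.html;
    proxy_intercept_errors on;
    recursive_error_pages on;
    error_page 404 /error.html;
}
```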

domain name system – How do I add hostnames to nginx so they are externally accessible?

Let’s say I have a domain name example.com and want to add a site, like admin.example.com. In nginx I can then set server_name in the configuration of this site to admin.example.com. However, if I try to access the site, it cannot find the IP address.

Do I need to add this to the DNS record itself? Isn’t it possible to make the server itself the DNS server, or to just point all the hostnames/subdomains (not sure what the right term is, but the * part in *.example.com) to the server’s IP in the original DNS record? If I want to add another hostname like admin2.example.com, I don’t want to edit the DNS record again.

I’m sure I’m missing something obvious here. I also can’t find any tutorial on this, which I find weird, as it should be something which is rather common.
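A wildcard DNS record is the usual answer here: it makes every subdomain resolve to the same address, and nginx then picks the right site by server_name. A zone-file sketch, with a hypothetical server IP:

```
example.com.    IN A  203.0.113.10
*.example.com.  IN A  203.0.113.10
```

With this in place, admin.example.com, admin2.example.com, and any future subdomain all reach the server without further DNS edits.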

nginx configuration works over curl, but not in browser?

I have an nginx configuration resembling something like this:

# mysite.com nginx config
server {
        listen 80 default_server;
        listen [::]:80 default_server;
        root /usr/share/mysite.com/ui;
        index index.html;
        server_name lbhost;

        location /ads.txt { 
            alias /usr/share/mysite.com/ads.txt; 
        }

        location /api/ {
            proxy_pass       http://localhost:5000/api/;
            proxy_set_header   X-Real-IP $remote_addr;
            proxy_set_header   X-NginX-Proxy true;
            proxy_http_version 1.1;
            proxy_set_header   Upgrade $http_upgrade;
            proxy_set_header   Connection keep-alive;
            proxy_set_header   Host $host;
            proxy_cache_bypass $http_upgrade;
        }

        location / {
            proxy_pass http://localhost:4000;
        }
}

The load-balanced servers have a .NET Core web API and an Angular Universal UI. This seems to work pretty well, but…

… the API proxy seems to work, as do the root paths, and (usually) the ads.txt file as well. Yet if I do something like

ubuntu@my-host-name:~$ curl http://localhost/api/version

…from the local machine, I’ll get a response like:

2020.5.13.4ubuntu@my-host-name:~$

… yet if I hit this endpoint from a browser, I get a completely empty page… even the source is empty. I would expect to see that text “2020.5.13.4” in the source document of the page at least.

How do I need to configure my nginx service to properly send responses?
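One way to narrow this down is to compare response headers rather than bodies, since an empty page with an empty source usually means the browser is hitting a different vhost, port, or scheme than curl is. A diagnostic sketch (hostname assumed), run from a machine outside the server:

```shell
$ curl -i http://mysite.com/api/version     # headers + body over plain HTTP
$ curl -i https://mysite.com/api/version    # does the browser use HTTPS instead?
```

If the status, Content-Type, or Content-Length differ between these and the working localhost request, the mismatch points at which server block (or upstream) the browser is actually reaching.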

webserver – How to protect web server private keys on Ubuntu with Nginx without exposing any plain text credentials?

I’m developing a set of internal websites and services for a customer who has high levels of bureaucracy and strict formal rules about many things, one of them being “not storing passwords in plain text”.

So, when they inspected my system configuration manual, they immediately pointed out that they could not accept storing private key passwords in a text file for Nginx to load on startup. It doesn’t matter that the file is readable only by root.

My arguments, such as “if someone got root access to your server then you have bigger problems than leaked private keys”, “The attacker could extract the keys from server process RAM anyway, no matter what encryption is being used”, “It’s a recursive problem because if I encrypt the password file, Nginx will need the password to decrypt the password file to decrypt the keys” did not work.

It seems the customer is just used to how IIS works – the private keys are protected by CNG mechanisms and you don’t have to store plain-text passwords, keys, or API tokens anywhere.

How do I achieve that on Ubuntu and Nginx without making things too messy?

I really don’t want to migrate everything to Windows and then explain to the customer why they need one more Windows Server licence when the initial idea was to use a free Ubuntu server.
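On the systemd side there is a rough analogue of what CNG provides on Windows: the key passphrase can be sealed with systemd-creds (optionally TPM2-backed, requires systemd ≥ 250, i.e. Ubuntu 22.04+) and decrypted only into the service’s runtime credential directory, which nginx can read via its ssl_password_file directive. A sketch, assuming a passphrase-protected key and a credential named keypass loaded through LoadCredentialEncrypted= in an nginx unit drop-in:

```nginx
ssl_certificate     /etc/ssl/certs/site.crt;
ssl_certificate_key /etc/ssl/private/site.key;   # passphrase-protected key
# read the passphrase from the credential systemd decrypts at service start;
# nothing in plain text survives on disk
ssl_password_file   /run/credentials/nginx.service/keypass;
```

The credential file itself would be created once with `systemd-creds encrypt`, so only the sealed blob is stored on disk.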

php – Nginx proxy request to url specified in GET variable

I need Nginx to respond to a request like

https://example.org/proxy/?url=https://othersite.com/path/file.php%3Fa=test123%26b=321tset

or a similar method, like

https://example.org/proxy/https://othersite.com/path/file.php?a=test123&b=321tset

by proxying the request to

https://othersite.com/path/file.php?a=test123&b=321tset

Is there a way to do this with rewriting or a different rule? Any help would be appreciated. Thank you in advance.
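For the ?url= variant, nginx can proxy straight to a variable, although this needs a resolver (proxy_pass with variables resolves hostnames at request time) and effectively creates an open proxy unless access is restricted. A sketch, assuming the target arrives URL-decoded in the url query argument:

```nginx
location /proxy/ {
    # required: variable proxy_pass targets are resolved at runtime
    resolver 1.1.1.1 valid=300s;
    # $arg_url holds the raw value of the "url" query argument
    proxy_pass $arg_url;
}
```

Anyone who can reach /proxy/ can make the server fetch arbitrary URLs, so this should be combined with allow/deny rules or authentication.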

nginx – error configuring proxy pass for upstream app

I have an application running as a Docker container mapped to port 8080. On the same server, nginx also serves a Laravel application that has some URLs with api at the context root, e.g. https://example.com/api/news. The Docker application’s URLs start with either web/ or api/, so to avoid proxying confusion I’m trying to serve the whole Docker application under the /comments context path: Laravel requests keep using host/api, while the Docker app is reached at host/comments/api etc. I have the following locations in the configuration (in the order shown here):

upstream remark42 {
    server 127.0.0.1:8080 weight=100 max_fails=5 fail_timeout=5;
}

server {
    listen 80;
    server_name www.example.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    access_log /var/www/example.com/storage/logs/access.log;
    error_log /var/www/example.com/storage/logs/error.log;

    ssl_certificate /etc/nginx/ssl/example_com_chain.crt;
    ssl_certificate_key /etc/nginx/ssl/example_com.key;

    root /var/www/example.com/public/;
    index index.php index.html index.htm;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass    unix:/run/php/php7.4-fpm.sock;
        fastcgi_index   index.php;
        fastcgi_param   SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param   APP_ENV  production;
        include         fastcgi_params;
    }

    location /comments {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        rewrite ^/comments(.*) /$1 break;
        proxy_pass http://remark42/;
    }

    location ~* \.(?:css|js)$ {
        access_log        on;
        etag              on;
        if_modified_since exact;
        add_header Pragma "public";
        add_header        Cache-Control "max-age=31557600, public, must-revalidate, proxy-revalidate";
    }

And then at the end of file

location / {
  try_files $uri $uri/ /index.php?$args;
}

I get a 404 error on accessing https://www.example.com/comments/web/embed.js or https://www.example.com/comments/api/v1/user?blah

with the JS file producing the following error in the logs:

2020/05/18 18:05:54 [error] 3047#3047: *5035 open() "/var/www/example.com/public/comments/web/last-comments.js" failed (2: No such file or directory), client: 49.207.48.221, server: , request: "GET /comments/web/last-comments.js HTTP/2.0", host: "www.example.com", referrer: "https://www.example.com/blogs/yet-another-svn-change-log-tool"

So the proxy_pass doesn’t work, and nginx tries to fetch the JS files from disk instead.
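One plausible cause: regex locations such as the css/js block are evaluated before the plain /comments prefix match, so /comments/web/embed.js ends up in the static-file location and is looked up under the document root, which is exactly what the log shows. A sketch of the usual fix, using the ^~ modifier so the prefix match wins over regex locations:

```nginx
location ^~ /comments/ {
    # strip the /comments prefix before handing off to the upstream
    rewrite ^/comments/(.*) /$1 break;
    proxy_pass http://remark42;
}
```

With ^~, nginx stops searching regex locations once this prefix matches, so both /comments/web/... and /comments/api/... go to the upstream.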

nginx – use local error_page when remote error_page not found

I’m trying to create a fallback for my error_page. Basically, the logic should be something like the following

load foobar.html -> does not exist on remote server -> load 404.html from remote server to show the 404 page -> does not exist on remote server -> load 404.html from the local filesystem.

I have the following, and loading both localhost/404.html and localhost/global404.html works, but when I break localhost/404.html (by removing the file from the http server) it does not show the global404.html page as I’d expect.


server {
    listen      80;
    server_name example.com www.example.com;
    proxy_intercept_errors on;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        error_page 404 /404.html;
    }

    location /404.html {
        proxy_pass http://localhost:3000/404.html;
        error_page 404 /global404.html;
    }

    location /global404.html {
        root /usr/share/nginx/html;
    }
}

laravel – Modifying the nginx configuration on Elastic Beanstalk

I’m deploying my Laravel application to nginx on Amazon Elastic Beanstalk. When I deploy it, the /api/* routes don’t work. The Laravel documentation specifies that the following must be added to the nginx configuration:

location / { try_files $uri $uri/ /index.php?$query_string; }

After adding it, everything works, but the problem comes when deploying a new version: the nginx configuration file reverts to how it was before, without that line.

I have also tried adding the following to .ebextensions, but it doesn’t work:

    container_commands:
      stop_nginx:
        command: "sudo service nginx stop"
      crear_nginx_config:
        command: "sudo cp /etc/nginx/nginx.complete.conf /etc/nginx/nginx.conf"
        leader_only: true
      start_nginx:
        command: "sudo service nginx start"
        leader_only: true

Here nginx.complete.conf is a file with the entire initial configuration plus the line above. I don’t know much about .ebextensions files and I’m not sure whether they are configured correctly. Thanks in advance.
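A likely reason the copy does not stick is that each deployment regenerates the platform’s nginx config after (or independently of) the container_commands run. On Amazon Linux 2 platforms, the supported route is a drop-in under .platform/ in the application bundle, which Elastic Beanstalk copies into the nginx include directory on every deploy. A sketch, assuming an Amazon Linux 2 platform (the file name is arbitrary):

```nginx
# .platform/nginx/conf.d/elasticbeanstalk/laravel.conf
# included inside the default server block on each deployment
location / {
    try_files $uri $uri/ /index.php?$query_string;
}
```

Because the file ships with the application version itself, it survives new deployments instead of being overwritten.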

mysql – WordPress website hosted on nginx ubuntu isn’t loading anymore

I just found out the WordPress website isn’t running anymore. When opening example.in, it simply shows the text Error establishing a database connection. The WP website is in the folder /var/www/examplewp.

I have other non-PHP websites running smoothly on the same server. Even xxx.example.in, which is a non-PHP website, is working.

I tried opening files such as example.in/readmore.html or example.in/hello.txt, which I created in the base folder of the WP site, and they work.

Here’s the details:

php -v

PHP 7.2.19-0ubuntu0.18.10.1 (cli) (built: Jun  4 2019 14:46:43) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
with Zend OPcache v7.2.19-0ubuntu0.18.10.1, Copyright (c) 1999-2018, by Zend Technologies

uname -a

Linux ubuntu-s-1vcpu-1gb-blr1-01 4.18.0-25-generic #26-Ubuntu SMP Mon Jun 24 09:32:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

This is my xxx_nginx.conf file

server {
        server_name example.in www.example.in;
        root /var/www/examplewp;
        access_log /var/log/nginx/wp_client_access.log;
        error_log /var/log/nginx/wp_client_error.log;

        location / {
                index   index.php index.html;
                #try_files      $uri $uri/ /index.php?$args;
        }
        # Specify a charset
        charset                         utf-8;
        # GZIP
        gzip                            off;

        # Add trailing slash to */wp-admin requests.
        rewrite /wp-admin$ $scheme://$host$uri/ permanent;

        # Prevents hidden files (beginning with a period) from being served
        location ~ /\. {
                access_log                      off;
                log_not_found                   off;
                deny                            all;
        }
        ###########
        # SEND EXPIRES HEADERS AND TURN OFF 404 LOGGING
        ###########

        location ~* ^.+\.(xml|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
                access_log                      off;
                log_not_found                   off;
                expires                         max;
        }

        # Pass all .php files onto a php-fpm or php-cgi server
        location ~ \.php$ {
                try_files                       $uri =404;
                include                         /etc/nginx/fastcgi_params;
                fastcgi_read_timeout            3600s;
                fastcgi_buffer_size             128k;
                fastcgi_buffers                 4 128k;
                fastcgi_param                   SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass                    unix:/run/php/php7.2-fpm.sock;
                fastcgi_index                   index.php;
        }

        # ROBOTS

         location = /robots.txt {
               allow all;
               log_not_found off;
               access_log off;
        }
        # RESTRICTIONS
        location ~* /(?:uploads|files)/.*\.php$ {
                deny all;
        }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.in/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.in/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = www.example.in) {
        return 301 https://$host$request_uri;
    } # managed by Certbot


    if ($host = example.in) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

        server_name example.in www.example.in;
    listen 80;
    return 404; # managed by Certbot
}

I tried restarting the server with sudo service nginx restart but it doesn’t help. Even the HTML and TXT files aren’t opening anymore. I tried the command sudo service php7-fpm restart but got the response:

Failed to restart php7-fpm.service: Unit php7-fpm.service not found.

I can open the info.php file though, which contains phpinfo();, and see all the PHP-related information.

I also checked the error log, but it’s empty: /var/log/nginx/wp_client_error.log

I tried restarting mysql with this command: sudo /etc/init.d/mysql start and got the following error:

[....] Starting mysql (via systemctl): mysql.service
Job for mysql.service failed because the control process exited with error code.
See "systemctl status mysql.service" and "journalctl -xe" for details.
 failed!
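“Error establishing a database connection” combined with a MySQL service that refuses to start points at MySQL itself rather than nginx or PHP. A diagnostic sketch; on a 1 GB instance like this one, an out-of-memory kill or a full disk are common culprits (note also that the PHP-FPM unit here is php7.2-fpm, as the fastcgi_pass socket path shows, not php7-fpm):

```shell
$ systemctl status mysql.service           # why did the control process exit?
$ journalctl -u mysql --no-pager | tail -n 50
$ df -h                                    # a full disk stops InnoDB from starting
$ free -m                                  # OOM kills are common on small instances
$ sudo service php7.2-fpm restart          # the correct unit name for PHP 7.2
```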

nginx – How to load-balance scaled Docker containers

I’m trying to balance requests between two Docker containers launched in scale mode with allocated ports:

version: '2'

services:
  nginx:
      image: 'bitnami/nginx:latest'
      depends_on:
        - phpfpm
        - nodejs
      networks:
        - app-tier
      ports:
        - 8000:8000
      volumes:
        - ./nginx/vhost.conf:/opt/bitnami/nginx/conf/server_blocks/vhost.conf

  nodejs:
      image: 'node:12.16.2-alpine'
      command: "npm serve"
      ports:
        - 4200
      volumes:
        - ../node/:/app
      networks:
        - app-tier
networks:
    app-tier:
      driver: bridge

Then I add an nginx proxy configuration that proxies requests using the inner hostname and port of the nodejs service:

server {
  listen 0.0.0.0:8000;

  location / {
      proxy_pass http://nodejs:4200;
  }
}

Containers:

dfc75a7f4f2d        node:12.16.2-alpine    "docker-entrypoint.sh"   21 minutes ago      Up 21 minutes       0.0.0.0:32783->4200/tcp   docker_nodejs_2
759477246adf        bitnami/nginx:latest   "/entrypoint.sh /run."   25 minutes ago      Up 25 minutes       8080/tcp, 0.0.0.0:8000->8000/tcp, 8443/tcp         docker_nginx_1
75def14e0a35        node:12.16.2-alpine    "docker-entrypoint.sh"   5 hours ago         Up 25 minutes       0.0.0.0:32781->4200/tcp   docker_nodejs_1

First I run one nodejs container using docker-compose up, then try to scale it using the following command:

docker-compose up --scale nodejs=2

So in this case only one nodejs service works, and the second one (which was launched later) doesn’t respond to any request from nginx. How can I fix or debug this, and balance requests to http://nodejs:4200 between the scaled instances?
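One plausible explanation: nginx resolves the name nodejs once at startup and caches a single container IP, so containers added by scaling are never seen. A sketch that leans on Docker’s embedded DNS instead, assuming the containers share the app-tier network; using a variable in proxy_pass forces a fresh lookup, and Docker’s DNS round-robins across the scaled replicas:

```nginx
server {
    listen 0.0.0.0:8000;
    # Docker's embedded DNS server; re-resolve rather than caching one IP
    resolver 127.0.0.11 valid=10s;
    set $backend http://nodejs:4200;

    location / {
        # a variable target is resolved at request time, not at startup
        proxy_pass $backend;
    }
}
```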