virtualhost – nginx: deny all requests to a virtual host when the requests come from an nginx TCP forwarder

I'm working on a setup with a front-facing Nginx host that load-balances all TCP traffic on port 443 to upstream backend servers.

The nginx config of the load balancer (server C) is as below:

stream {
        upstream stream_backend {
                hash $remote_addr consistent;
                server 10.15.15.3:443;   ## server A
                server 10.15.15.9:443;   ## server B
        }


        server {
                listen     443;
                proxy_pass stream_backend;
                proxy_timeout 5s;
                proxy_connect_timeout 5s;
        }
}

Server A and server B have the nginx.conf below. They are identical servers running the same apps.

Each of them runs two virtual hosts, and both are working fine.

http {

    server {
        server_name mysite1.example.com;
        listen *:443 ssl;
        listen [::]:443 ssl;
        
        
        location ^~ /static/ {
            ...
        }
        ...
        
        ssl_certificate        file.pem;
        ssl_certificate_key    file.key;
    }


    server {
        server_name mysite2.example.com;
        listen *:443 ssl;
        listen [::]:443 ssl;
        
        
        location /somethin {
            ...
        }
        location /something2 {
            ...
        }
        
        ssl_certificate        file.pem;
        ssl_certificate_key    file.key;
    }
}

What I need is to whitelist only a few IPs for the virtual host mysite1.example.com.
The issue I face is that nginx on servers A and B sees the load balancer's IP as the client IP, so when I tried adding allow IP; deny all; it didn't work for any host, because every request arrives with the load balancer's IP as the client IP.

Can someone guide me on the proxy/real-IP configuration needed to get the setup described above working?
The setup is complete except for the IP whitelist issue.

P.S. SSL termination happens at the back-end servers, server A and server B.

I've searched the web and found these links helpful, but I still couldn't figure out how to get it all working:

https://stackoverflow.com/questions/40873393/nginx-real-client-ip-to-tcp-stream-backend
https://www.cyberciti.biz/faq/nginx-redirect-backend-traffic-based-upon-client-ip-address/
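For reference, the approach described in the first link is to enable the PROXY protocol between the load balancer and the backends and let the backends recover the real client address from it. Below is a sketch of how I understand that would map onto my setup (the balancer address 10.15.15.1 and the whitelisted client 203.0.113.10 are placeholders, not my real addresses):

# on server C (the stream load balancer): announce the original client via PROXY protocol
stream {
        server {
                listen     443;
                proxy_pass stream_backend;
                proxy_protocol on;
        }
}

# on servers A and B: accept the PROXY protocol header and use it as the client address
http {
    server {
        server_name mysite1.example.com;
        listen *:443 ssl proxy_protocol;
        listen [::]:443 ssl proxy_protocol;

        set_real_ip_from 10.15.15.1;       # the load balancer's IP (placeholder)
        real_ip_header   proxy_protocol;

        allow 203.0.113.10;                # whitelisted client (placeholder)
        deny  all;
        ...
    }
}

As I understand it, once proxy_protocol is added to the backend listen directives, every connection to those sockets must come through the balancer (a direct hit would fail the protocol handshake), which is fine for this setup.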

tcp – What prevents this specific type of attack from being viable?

Imagine a user has an ip of 1.2.3.4

The server the user intends to connect to has an ip of 2.3.4.5

An attacker has a machine with a promiscuous network card on the user’s local network.

The attacker also has a server on a separate network with ip 3.4.5.6

The user sends a request to 2.3.4.5, which the attacker has DDoS'd. As such, 2.3.4.5 will not respond.

The attacker's machine on the user's local network sniffs the request and sends it to 3.4.5.6; 3.4.5.6 is set up to use this information to form a request to 1.2.3.4, spoofing the IP of 2.3.4.5 and using the captured TCP sequencing information so the packet looks real.

When the user sends another request, it is once again sniffed by the attacker’s local machine and sent to 3.4.5.6 which can then send another false request. The cycle continues.

Since 3.4.5.6 appears to be 2.3.4.5 and since 3.4.5.6 is NOT located on the user’s local network, the user’s firewall is unable to detect any foul play.

I’m assuming that this type of attack is not actually possible and that somewhere there is a misconception on my part about how networking works. Why would an attack like this not be possible?

python – WriteString to TCP socket appears to be broken in Mathematica 12.3

I have a large codebase in which some functions make calls to Python TCP servers. The code used to work perfectly in Mathematica 12.2, but has not worked since I updated to Mathematica 12.3 last night.

I have managed to isolate the problem to the call to WriteString. The following is a minimum illustration of the issue. Here is the Python server:

import socketserver
import json
import time


class TCPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        bufSz = 2048
        while(True):
            self.data = self.request.recv(bufSz)
            if self.data:
                jsonReq = json.loads(self.data.strip())
                print(f"Request: {jsonReq}")
                jsonRes = json.dumps({"res": len(jsonReq["arg"])})
                print(f"Response: {jsonRes}")
                self.request.sendall(bytes(jsonRes, "utf-8"))
            else:
                time.sleep(0.10)
                self.data = ""
                continue


def main():
    host, port = "localhost", 9990
    with socketserver.TCPServer((host, port), TCPHandler) as server:
        print(f"Test server now running at {host} on port {port}")
        server.serve_forever()


if __name__ == "__main__":
    main()

And here is the Wolfram Language code that calls it:

ClearAll[sock];
sock = SocketConnect[{"localhost", 9990}, "TCP"];

ClearAll[params];
params = ExportString[<|"arg" -> "test string"|>, "JSON"];

Module[{res},
    WriteString[sock, params];
    res = ByteArrayToString@SocketReadMessage[sock];
    ImportString[res, "JSON"]
]

This exact set-up worked reliably for me up to and including Mathematica 12.2, but now execution hangs at the WriteString call.

I have tried turning off Windows firewall entirely. The problem persists.
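As a sanity check, the server can be exercised without Mathematica using a plain Python client; here is a minimal sketch (hypothetical test code, not part of the original setup) that sends the same JSON payload:

# test_client.py - quick check that the Python server answers on its own
import json
import socket

with socket.create_connection(("localhost", 9990)) as sock:
    payload = json.dumps({"arg": "test string"})
    sock.sendall(payload.encode("utf-8"))
    print(json.loads(sock.recv(2048)))   # expect {'res': 11}

If this client gets a reply, the hang is presumably on the Mathematica/WriteString side rather than in the server.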

Any assistance would be gratefully acknowledged.

Should I use UDP or TCP for logging to a SIEM?

We have an application that runs on hundreds of users' computers on our company's internal network. We want to start sending logs from this app to a SIEM (Graylog), and we have decided to add code that sends the logs to the SIEM directly. The only question is: should we use UDP or TCP? My preference is TCP because of its reliability, but what happens if the SIEM goes offline? Won't that cause our app to block, slowing down our entire system? I am very curious how other companies handle this. I have read a few guides online, and most recommend TCP for reliability, but none address the blocking issue.
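What I'm leaning toward, to sidestep the blocking concern, is TCP with a bounded in-memory queue drained by a background sender thread, so an unreachable SIEM only costs dropped log lines rather than a stalled application. A rough sketch of the idea (the host name and port are placeholders, not our real Graylog input):

# sketch: non-blocking log shipping over TCP via a bounded queue
import queue
import socket
import threading

LOG_QUEUE = queue.Queue(maxsize=1000)          # bounded, so memory stays flat

def sender_loop(host="graylog.internal.example", port=12201):
    """Background thread: forward queued lines over TCP, reconnecting as needed."""
    sock = None
    while True:
        line = LOG_QUEUE.get()
        try:
            if sock is None:
                sock = socket.create_connection((host, port), timeout=2)
            sock.sendall(line.encode("utf-8") + b"\n")
        except OSError:
            sock = None                        # SIEM unreachable: drop this line, retry on the next

def log(line):
    """Called from application code; never blocks, even if the SIEM is down."""
    try:
        LOG_QUEUE.put_nowait(line)
    except queue.Full:
        pass                                   # queue full: drop rather than slow the app

threading.Thread(target=sender_loop, daemon=True).start()
log("application started")

The trade-off is that bursts beyond the queue size are silently lost, which seems preferable to blocking; I'd still like to hear how others handle it.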

Allow UDP connection only if a TCP connection was made before

I'm trying to figure out how I can allow UDP connections on a port only if a TCP connection was made first. I've tried the recent module's rcheck, but no luck.
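For reference, this is the kind of rule set I've been attempting with the recent match (the port numbers are placeholders for my actual service, and this is exactly the part that isn't behaving for me yet):

# remember the source address of anyone opening a TCP connection to port 4000
iptables -A INPUT -p tcp --syn --dport 4000 -m recent --name tcp_seen --set -j ACCEPT

# allow UDP to port 4001 only from addresses seen on the TCP port within the last 60 seconds
iptables -A INPUT -p udp --dport 4001 -m recent --name tcp_seen --rcheck --seconds 60 -j ACCEPT
iptables -A INPUT -p udp --dport 4001 -j DROP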

centos8 – Configuring Squid to not log TCP connections (lots of “error:transaction-end-before-headers” showing up in logs)

We run Squid proxies in GCP, and are in the process of migrating from CentOS 7 to 8. I’m working on using the GCP Internal L4 load balancer to improve redundancy/failover, and have configured a basic TCP healthcheck which is working fine.

However, it looks like Squid version 4 logs TCP connections. So, every 10 seconds I get 3 entries added to /var/log/squid/access.log:

1618013711.836      0 35.191.10.117 NONE/000 0 NONE error:transaction-end-before-headers - HIER_NONE/- -
1618013712.256      0 35.191.9.223 NONE/000 0 NONE error:transaction-end-before-headers - HIER_NONE/- -
1618013712.484      0 35.191.10.121 NONE/000 0 NONE error:transaction-end-before-headers - HIER_NONE/- -

This would generate 25,920 lines a day of logs, which I’d like to avoid. Is there a way to configure Squid to not do this? The default squid.conf file didn’t have much as far as documentation/explanation.
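One idea I'm considering is to exclude the health-check probes from the access log with an acl on their source ranges; a sketch of the squid.conf change (the CIDR ranges are what I believe the GCP health checkers use, so treat them as an assumption to verify):

# squid.conf (sketch): don't log connections from the load balancer health checkers
acl gcp_healthcheck src 35.191.0.0/16 130.211.0.0/22
access_log none gcp_healthcheck
access_log daemon:/var/log/squid/access.log squid

I'm not sure whether this is the intended way to silence these, though.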

python – How to reproduce a TCP stream via HTTP requests

Suppose a scenario where normal TCP/IP traffic needs to be passed through an HTTP server, as in the scheme below:

Client <> SOCKS5 proxy <> HTTP server <> Remote

First, the client's connection to the remote host goes through a SOCKS5 proxy. Then, the SOCKS5 proxy is responsible for converting the outgoing data into an HTTP request to another server. Finally, the HTTP server sends the data to the remote server and returns the response, simulating a TCP stream.

For example:

Client → localhost:8080 (SOCKS5 server):

domain test.com.br

localhost:8080 → localhost:8081 (HTTP server):

GET /?target=0:whois.registro.br:43:domain+test.com HTTP/1.1
Host: localhost:8081
Accept: */*
Connection: close

Then, the HTTP server will send domain test.com.br to whois.registro.br:43 and pass the response all the way back to the client.

I’ve already written the SOCKS and HTTP server algorithms in Python:

#!/usr/bin/env python3
# socks_server.py

import logging
import socket
import struct
from threading import Thread
from queue import Queue
from time import sleep
from socketserver import ThreadingMixIn, TCPServer, StreamRequestHandler
from base64 import urlsafe_b64encode, b64decode

SOCKS_VERSION = 5

TUNNEL_ADDR = '0.0.0.0'             # http server IP address
TUNNEL_PORT = 8080                  # http server port
TUNNEL_HOST = 'localhost:8080'      # http server 'Host' header

class ListenThread(Thread):
    q_w: Queue                      # worker queue
    client: socket.socket           # client socket

    def __init__(self, client, args=(), kwargs=None):
        Thread.__init__(self, args=(), kwargs=None)
        self.q_w = Queue()
        self.daemon = True

        self.client = client

    def run(self):
        while True:
            res = self.client.recv(4096)
            self.q_w.put(res)


class ThreadedTCPServer(ThreadingMixIn, TCPServer):
    allow_reuse_address = True
    pass

class SocksProxy(StreamRequestHandler):
    remote_addr:str
    remote_port:int

    def handle(self):
        logging.info('Accepting from %s:%s' % self.client_address)

        header = self.connection.recv(2)
        version, nmethods = struct.unpack('!BB', header)

        assert version == SOCKS_VERSION
        assert nmethods > 0

        methods = self.get_available_methods(nmethods)

        # no auth
        if 0 not in set(methods):
            self.server.close_request(self.request)
            return

        # welcome msg
        self.connection.sendall(struct.pack('!BB', SOCKS_VERSION, 0))

        version, cmd, _, address_type = struct.unpack('!BBBB', self.connection.recv(4))
        assert version == SOCKS_VERSION

        if address_type == 1:
            self.remote_addr = socket.inet_ntoa(self.connection.recv(4))
        elif address_type == 3:
            domain_length = self.connection.recv(1)[0]
            self.remote_addr = self.connection.recv(domain_length).decode()
            self.remote_addr = socket.gethostbyname(self.remote_addr)

        self.remote_port = struct.unpack('!H', self.connection.recv(2))[0]

        try:
            if cmd == 1:
                pass
            else:
                self.server.close_request(self.request)

            addr = struct.unpack('!I', socket.inet_aton(self.remote_addr))[0]
            port = self.remote_port
            reply = struct.pack('!BBBBIH', SOCKS_VERSION, 0, 0, 1, addr, port)

        except:
            reply = self.generate_failed_reply(address_type, SOCKS_VERSION)

        self.connection.sendall(reply)

        # data exchange
        if reply[1] == 0 and cmd == 1:
            self.exchange_loop(self.connection)

        self.server.close_request(self.request)
        


    def get_available_methods(self, n):
        return [ ord(self.connection.recv(1)) for i in range(n) ]

    def generate_failed_reply(self, address_type, error_number):
        return struct.pack('!BBBBIH', SOCKS_VERSION, error_number, 0, address_type, 0, 0)

    # relevant part here
    def exchange_loop(self, client):

        packetnum = 0

        while True:

            # reads from client
            req = b''
            while True:
                chunk = client.recv(4096)
                req += chunk

                if len(chunk) < 4096:
                    break

            segments = (
                str(packetnum).encode(),
                self.remote_addr.encode(),
                str(self.remote_port).encode(),
                req
            )

            segments = ( urlsafe_b64encode(s).decode() for s in segments )
            segments = ':'.join(segments)

            data = 'GET /?target=%s HTTP/1.1\r\n' % segments
            data += 'Host: %s\r\n' % TUNNEL_HOST
            data += 'Accept: */*\r\n'
            data += 'Connection: close\r\n\r\n'

            print('>> ' + repr(data))

            # connects to the HTTP server and send request
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            s.connect((TUNNEL_ADDR, TUNNEL_PORT))
            s.sendall(data.encode())

            res = b''
            while True:
                chunk = s.recv(4096, socket.MSG_WAITALL)
                res += chunk

                if len(chunk) < 4096:
                    break

            s.close()

            print('<< ' + repr(res))

            if res.decode().find('HTTP') != 0:
                raise Exception('not a HTTP response')

            # finally, retrieves response to client
            res = res.decode().split('\r\n\r\n', 1)[1]
            res = b64decode(res)

            print(res)

            if client.send(res) <= 0:
                break

            packetnum += 1

if __name__ == '__main__':
    with ThreadedTCPServer(('0.0.0.0', 9011), SocksProxy) as server:
        server.serve_forever()

The HTTP server spawns a thread for each new connection and stores it in a dict so the thread can be reused. For our purpose, the proxy must be able to send and receive more than one packet per connection.

#!/usr/bin/env python3
# http_server.py

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import parse
from base64 import urlsafe_b64decode, b64encode
from queue import Queue

import threading
import socket

class SocketThread(threading.Thread):
    socket: socket.socket       # socket to remote server
    q_m:Queue           # main queue (worker -> main)
    q_w:Queue           # worker queue (main -> worker)

    addr:str            # remote ip address
    port:int            # remote port

    def __init__(self, queue, args=(), kwargs=None):
        threading.Thread.__init__(self, args=(), kwargs=None)
        self.q_m = queue
        self.q_w = Queue()
        self.daemon = True
        
        (self.addr, self.port, self.r) = args

    def run(self):
        print(threading.current_thread().getName())
        
        self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.socket.connect((self.addr, self.port))
        
        while True:
            try:
                val = self.q_w.get()

                bc = self.socket.sendall(val if isinstance(val, bytes) else val.encode())
                print(f'{bc} bytes sent to {self.addr}:{self.port}')

                ret = b''
                while True:
                    chunk = self.socket.recv(4096, socket.MSG_WAITALL)
                    ret += chunk

                    if len(chunk) < 4096:
                        break
                    
                ret = b64encode(ret)
                print('Got response!')

                self.r.push_packet(ret)
                self.q_m.put(ret)

            except:
                self.q_m.put('CONN_CLOSE')
                raise
                break


class Handler(BaseHTTPRequestHandler):
    protocol_version = 'HTTP/1.1'
    connections: dict = {}   # dict containing socket threads

    def __init__(self, *args, **kwargs):
        super(BaseHTTPRequestHandler, self).__init__(*args, **kwargs)

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/plain')

        try:
            parsed_uri = parse.urlsplit(self.path)
            query = dict(parse.parse_qsl(parsed_uri.query))

            segments = query['target'].split(':')
            segments = [ urlsafe_b64decode(s) for s in segments ]

            assert len(segments) == 4

            self.packetnum = int(segments[0] or 0)
            self.port = int(segments[2] or 0)
            self.addr = segments[1]
            self.data = segments[3]

            assert self.addr != None
            assert self.port != 0

            if self.packetnum == 0 or self.data:
                conn_name = '%s:%d' % (self.addr, self.port)
                if not self.connections.get(conn_name):
                    print('New connection, spawning new thread...')
                    q = Queue()
                    self.connections[conn_name] = SocketThread(q, args=(self.addr, self.port, self.r))
                    self.connections[conn_name].start()
                else:
                    print('Reusing thread')

                print(f'name: {conn_name}')
                self.connections[conn_name].q_w.put(self.data)

                ret = self.connections[conn_name].q_m.get()

                if ret == 'CONN_CLOSE':
                    print(ret)
                    self.connections[conn_name] = None
                    return

                print(repr(ret))
                self.send_header('Content-length', len(ret))
                self.end_headers()
                self.wfile.write(ret)

        except KeyError:
            self.end_headers()
            raise


def main():
    server = HTTPServer(('0.0.0.0', 8080), Handler)
    server.serve_forever()

if __name__ == '__main__':
    main()

This, however, doesn't seem to be working: the SOCKS server suddenly blocks at some point, and I haven't yet figured out how to handle connection close. Could someone give me a hand getting it working?

How to route all outbound TCP to localhost:8080 with pfctl?

I'm looking for an updated (Big Sur) macOS alternative to this iptables command:

linux iptables

sudo sysctl net.ipv4.ip_forward=1

sudo iptables -t nat -I PREROUTING -p tcp --dport 55 -j REDIRECT --to-port 8080

I.e., enable IP forwarding and redirect all TCP traffic destined for port 55 to a TCP proxy listening at 127.0.0.1:8080.

Trying to accomplish the same on macOS, I got to the following:

mac pfctl

sudo sysctl -w net.inet.ip.forwarding=1

echo "rdr pass inet proto tcp from any to any port 55 -> 127.0.0.1 port 8080" | sudo pfctl -ef -

This, however, doesn't work; instead it stalls the packets so that they never reach my proxy.
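From what I've read, pf's rdr only translates packets as they arrive on an interface, so locally generated outbound traffic never matches the rule; the workaround people describe is to redirect on lo0 and route matching outbound packets to lo0. An untested sketch of that idea (same ports as above; whether this still works on Big Sur is exactly what I'm unsure about):

sudo sysctl -w net.inet.ip.forwarding=1

cat <<'EOF' | sudo pfctl -ef -
rdr pass on lo0 inet proto tcp from any to any port 55 -> 127.0.0.1 port 8080
pass out route-to lo0 inet proto tcp from any to any port 55 keep state
EOF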

Any help appreciated.

Tracking TCP Connection in background

I am looking for a daemon/utility to track all non-local TCP connections: which binaries establish TCP connections (actively and passively), and with which IPs and ports.

auditd seems like a great tool.

Following this post, I notice that the following rule captures all connections:
auditctl -a exit,always -F arch=b64 -S connect -k MYCONNECT

I see many entries like these:

type=SOCKADDR msg=audit(04/01/2021 10:54:23.327:397) : saddr={ fam=local path=/dev/log } 
type=SYSCALL msg=audit(04/01/2021 10:54:23.327:397) : arch=x86_64 syscall=connect success=yes exit=0 a0=0x4 a1=0x7fc64b29a6c0 a2=0x6e a3=0x20656c62616e6520 items=1 ppid=3116 pid=3156 auid=root uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts0 ses=2 comm=sudo exe=/usr/bin/sudo key=MYCONNECT 
type=SOCKADDR msg=audit(04/01/2021 10:54:23.328:403) : saddr={ fam=local path=/var/run/dbus/system_bus_socket } 
type=SYSCALL msg=audit(04/01/2021 10:54:23.328:403) : arch=x86_64 syscall=connect success=yes exit=0 a0=0x4 a1=0x55e28814cac8 a2=0x21 a3=0x7fff6e3462d0 items=1 ppid=3116 pid=3156 auid=root uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts0 ses=2 comm=sudo exe=/usr/bin/sudo key=MYCONNECT 

I wonder whether there is a way to filter by address family, limiting the capture to IPv4 and IPv6.

I can add a filter that captures the socket system call with AF family = IPv4 or IPv6, but for the connect system call I am not sure how to do so.
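To make the question concrete: the socket() rules below are the family filtering I have in mind (AF_INET = 2 and AF_INET6 = 10 as the a0 argument), and connect() is where I'm stuck, since the family lives inside the sockaddr that a1 points to rather than in a register; the ausearch post-filter is just a workaround I'm considering, not what I want:

# socket(): the address family is the first syscall argument, so it can be matched with -F a0=
auditctl -a exit,always -F arch=b64 -S socket -F a0=2  -k MYSOCK_INET
auditctl -a exit,always -F arch=b64 -S socket -F a0=10 -k MYSOCK_INET6

# connect(): a1 is only a pointer to the sockaddr, so -F cannot see the family;
# keep the broad rule and post-filter the local-socket noise instead
auditctl -a exit,always -F arch=b64 -S connect -k MYCONNECT
ausearch -k MYCONNECT -i | grep -E 'fam=(inet|inet6)'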

Thanks.

linux – TCP Packet drops on application server

I have a monitoring application (zabbix proxy) installed on RHEL 7.8, and since we have a very large environment, we have 2500+ agents connecting to this one server. We're seeing frequent errors when the agents try to connect to the server. Telnet to the port works, but only intermittently.

I increased the net.core.somaxconn limit to the absolute maximum but don’t see any notable effect.

I see something like the following in the netstat connection-state summary:

SYN_RECV 168 
CLOSE_WAIT 4 
ESTABLISHED 196 
FIN_WAIT1 3 
TIME_WAIT 1151

Also, the Recv-Q and Send-Q values on the listening socket are either both 128, or Recv-Q is 129 while Send-Q is 128:

# ss -ntl '( sport = :10051 )'
State       Recv-Q Send-Q                                     Local Address:Port                                                    Peer Address:Port
LISTEN      129    128                                                    *:10051                                                              *:*
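If I'm reading the ss output right, for a LISTEN socket Send-Q is the accept-queue limit passed to listen() (capped by net.core.somaxconn) and Recv-Q is the current queue length, so 129 against 128 would mean the accept queue is overflowing. A throwaway sketch I used to convince myself of that (hypothetical spare port, not the Zabbix code):

# backlog_demo.py - throwaway experiment
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(('0.0.0.0', 10052))      # hypothetical spare port
s.listen(128)                   # application-requested backlog
print("listening; in another shell run: ss -ntl '( sport = :10052 )'")
input('press Enter to exit')

If that's right, it might explain why raising net.core.somaxconn alone changed nothing: the listening application is still asking for a backlog of 128.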

I've already tried modifying the following parameters, which I came across while looking for a solution to the packet drops:

sysctl -w net.core.somaxconn=65535
sysctl -w net.core.netdev_max_backlog=65535
sysctl -w net.ipv4.tcp_fin_timeout=15
sysctl -w net.ipv4.tcp_syn_retries=2
sysctl -w net.ipv4.tcp_synack_retries=2
sysctl -w net.core.rmem_default=31457280
sysctl -w net.core.rmem_max=12582912

But I still see these counters climbing in netstat -s:

# netstat -s | grep -i list
    4741435797 times the listen queue of a socket overflowed
    4791100644 SYNs to LISTEN sockets dropped
# netstat -s | grep -i list
    4741436773 times the listen queue of a socket overflowed
    4791101620 SYNs to LISTEN sockets dropped
# netstat -s | grep -i list
    4741438013 times the listen queue of a socket overflowed
    4791102860 SYNs to LISTEN sockets dropped

I'm not really a Linux admin and am at my wits' end. Any details on how to resolve this would be much appreciated.

Thanks in advance for your help!

Regards,
Karan