kvm virtualization – Can’t see Docker ports from external machines when using a veth interface with an OPNsense KVM

Quick summary of the setup:

  • Ubuntu Server 20.04 with 4 network ports
  • OPNsense router running in libvirt KVM
  • One port is WAN, three ports are LAN (bridged)
  • Router works great
  • Server (the same one running OPNsense) gets LAN and internet access via a veth pair through the LAN bridge
  • Services run on various ports on the server, and external machines can access them
  • PROBLEM: if a service runs in Docker, its ports can be seen from the server itself, but not from other machines on the LAN (nmap shows them as “filtered”)
  • This is solved by setting the Docker container to run in “host” mode, which is obviously sub-optimal since port mapping is no longer possible

Why can’t external machines see ports exposed by Docker in this setup? I understand it’s a complicated networking setup, and there is probably some missing route between the Docker networks and the veth bridge, but everything I’ve checked looks fine. The Docker daemon appears to be configured to listen on all interfaces. I’m at a loss.
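
A minimal diagnostic sketch (assuming the LAN bridge is named br0, a placeholder): check whether bridged traffic is being pushed through iptables at all, and whether Docker’s FORWARD rules are what drop it.

# If this prints 1, traffic crossing the LAN bridge is evaluated by iptables,
# so Docker's default-DROP FORWARD policy can affect it
sysctl net.bridge.bridge-nf-call-iptables

# Allow forwarded traffic arriving from the LAN bridge to reach published
# container ports (DOCKER-USER is evaluated before Docker's own rules)
sudo iptables -I DOCKER-USER -i br0 -j ACCEPT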

docker – Security implications of granting non-root access to privileged ports (< 1024)

Lots of solutions to this problem exist (e.g. here and here), but in order to decide which is best I’d need to know more about the security implications of each solution (or at least of such solutions in general).

My context: I’m looking into running a rootless Docker/Podman Nginx container (on an Ubuntu Server 20.04 LTS host). Podman suggests a solution in its error message: Error: rootlessport cannot expose privileged port 80, you can add 'net.ipv4.ip_unprivileged_port_start=80' to /etc/sysctl.conf (currently 1024). From reading around, though, that doesn’t seem like a great solution, because it grants the ability to bind low ports to all users, not just the container’s user.
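
For comparison, the main approaches and how broadly each one grants privilege (a hedged sketch; the binary path is a placeholder):

# Option 1: what the Podman message suggests. System-wide: ANY user can then
# bind ports >= 80, not only the one running the container.
sudo sysctl net.ipv4.ip_unprivileged_port_start=80

# Option 2: grant one trusted binary (placeholder path) the capability to
# bind low ports; scoped to that executable rather than to all users.
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/my-proxy

# Option 3: publish the container on an unprivileged port and redirect
# 80 -> 8080 at the firewall; no privilege changes anywhere.
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080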

adb – What is the way to debug multiple Android devices (i.e. more than the number of ports on the PC) in parallel?

I need to connect 10 devices via USB at once. Does adb detect multiple Android devices connected via a USB hub?
I have explored connecting via tcpip one by one using the existing ports, but the problem is that if the LAN is not very stable, devices get disconnected and I have to connect them all over again. If I could connect all my devices via a USB hub, I wouldn’t have that issue.

I have already gone through this issue: Using USB peripherals with hardware debug
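
For what it’s worth, adb addresses each attached device by serial number, so a hub should work in principle; a minimal sketch:

# List every attached device with its serial and a short description
adb devices -l

# Target one device among many by serial (<SERIAL> is a placeholder)
adb -s <SERIAL> install app.apk
adb -s <SERIAL> logcat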

Possible to host multiple cPanel sites, but all on different ports besides port 80?

Kind of a unique case here, but I need to be able to host multiple cPanel web sites on a server that I own but have each website available f… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1832849&goto=newpost

Squid proxy multiple ports with different outgoing address configuration not working

I have a Squid configuration on Ubuntu 20 with multiple listening ports, and each port has a different outgoing address assigned using the “tcp_outgoing_address” directive. But on Ubuntu my configuration is somehow not working; in fact, it doesn’t work on any Linux distro, even though it works fine with Squid for Windows using the same config file.
Please find a sample config for one port below.


# Note: myportname takes port names only; a 'src' match belongs in its own acl
acl proxy4011 myportname 4011
http_access allow proxy4011
tcp_outgoing_address 192.168.11.100 proxy4011
http_port 192.168.1.192:4011 name=4011

The same kind of rules and ACLs have been added for several more ports.

So my question is: is there anything on the Linux side stopping Squid from sending traffic through a different NIC for each port, or am I missing something on the Squid configuration side? I have disabled the firewall and enabled IPv4 forwarding in the sysctl file. Is there some default network security mechanism in Linux preventing Squid from performing this operation?

If anyone needs to check my full conf file, I can paste it here.
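
A hedged checklist of the usual Linux-side suspects (the routing table number and gateway below are illustrative placeholders):

# The outgoing address must actually be assigned to a local interface;
# Squid cannot bind tcp_outgoing_address to an address the host does not own
ip addr show | grep 192.168.11.100

# Strict reverse-path filtering (rp_filter=1) can silently drop asymmetric
# traffic when the outgoing address lives on a different NIC or subnet
sysctl net.ipv4.conf.all.rp_filter

# Source-based policy routing so replies leave via the matching NIC
sudo ip rule add from 192.168.11.100 table 100
sudo ip route add default via 192.168.11.1 table 100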

rkhunter reports numerous warnings on haproxy open ports

Recently I installed rkhunter (v1.4.2) on a couple of load balancers (HAProxy 2.0.14) running on Debian 9 Stretch. While performing a full system check, I get a lot of warnings about TCP ports being used by HAProxy. They look like this:

Warning: Network TCP port 13000 is being used by /usr/sbin/haproxy. Possible rootkit: Possible Universal Rootkit (URK) SSH server
         Use the 'lsof -i' or 'netstat -an' command to check this.
Warning: Network TCP port 47018 is being used by /usr/sbin/haproxy. Possible rootkit: Possible Universal Rootkit (URK) component
         Use the 'lsof -i' or 'netstat -an' command to check this.

Also, it seems that I cannot simply whitelist those ports, as they keep changing.
What would one do in this case?
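
One possible direction, if your rkhunter version supports it (a sketch; verify the option against your rkhunter.conf documentation): whitelist by binary path rather than by port number, so the changing ports stop mattering. In /etc/rkhunter.conf:

# Trust any port opened by this binary instead of whitelisting port numbers
PORT_PATH_WHITELIST=/usr/sbin/haproxy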

MMC > Certificates > Other computers > Firewall. Which ports are used for accessing the cert store?

I am having a Windows Firewall issue when trying to access the certificate store on another computer in the same domain using MMC or PowerShell. I am logged in as a domain admin. Both servers are Windows Server 2019.

If I disable Windows Firewall on the remote computer, I can successfully add the snap-in and see the certificates. If I enable the firewall, I cannot connect.


I am also doing an installation of Service Fabric, which comes with a pre-installation check script that fails with a related error: it cannot access the remote certificate store. If I disable the firewall, this script also runs successfully.

Does anyone know the firewall requirements for this?
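
A hedged starting point: remote MMC snap-ins generally rely on SMB (TCP 445) and RPC (TCP 135 plus the dynamic RPC range), so enabling the corresponding built-in rule groups on the remote server is worth testing (PowerShell; confirm which group actually unblocks the snap-in in your environment):

Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"
Enable-NetFirewallRule -DisplayGroup "Remote Service Management"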

udp – How to set a different TTL/hop limit for specific TCP ports

We want to set sensitive TCP/UDP ports to a low TTL/hop limit, leaving all other traffic at the default of 128:

Zones:

  • Data center = 3
  • Private net = 8
  • Internet = 128 (default)

Per-port targets:

  • SSH 22 = 8 (internal hop max)
  • RDP 3389 = 8
  • HTTP 80 = 128 (internet)
  • HTTPS 443 = 128
  • EPMAP 135 = 8
  • MSSQL 1433 = 3 (inside data center)

Netsh can change the system-wide default, but we want to change the default per port.

Might we be able to:

1.) Use netsh to set a low hop limit (ttl=8)
2.) Use a listen command to open, say, port 22
3.) Use netsh to change the default to ttl=3
4.) Use a listen command to open port 1433

and so on?

I have no idea whether such a thing would work, but I’m pretty desperate to set a particular hop/TTL limit for particular TCP/UDP ports.

.NET 5 can do it for an app we develop ourselves; I’m just not sure how to set it for an existing general app like SSH, RDP, or MSSQL.

I have not found any way to set the TTL per port dynamically to limit how far the traffic travels.

I’d appreciate any of your thoughts. I am trying to do this using some form of script, batch file, or setting, as opposed to writing and running code; we want to be able to do this without installing any additional programs.
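
For reference, netsh only exposes a global default hop limit, which is why the per-port scheme above has nowhere to hook in (the value is illustrative):

# Sets the default TTL/hop limit for ALL outgoing IPv4 traffic;
# there is no per-port variant of this setting
netsh interface ipv4 set global defaultcurhoplimit=8

Per-socket TTL can only be set from code (e.g. setsockopt with IP_TTL), which is presumably what the .NET 5 remark above refers to.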

networking – Ports exposed by docker container are shown as filtered – unable to connect

I am working on a fresh server installation of Ubuntu 20.04.
I started a sample nginx container by running docker run --rm -p 80:80 nginx.
Port 80 appears to be open on the machine, though I can’t curl the nginx default page:

$ nmap localhost
Starting Nmap 7.80 ( https://nmap.org ) at 2020-11-15 13:06 GMT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000077s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 998 closed ports
PORT   STATE SERVICE
22/tcp open  ssh
80/tcp open  http

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:25:90:d7:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 81.169.xxx.xxx/32 scope global dynamic eno1
       valid_lft 60728sec preferred_lft 60728sec
    inet6 fe80::225:90ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 00:25:90:d7:xx:xx brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:70:d9:xx:xx brd ff:ff:ff:ff:ff:ff
    inet6 fe80::42:70ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
48: br-49042740d2e8: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:63:fe:xx:xx brd ff:ff:ff:ff:ff:ff
    inet6 fe80::42:63ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
68: veth17ce2e9@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether d6:e2:53:0b:xx:xx brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::d4e2:53ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever


# Generated by iptables-save v1.8.4 on Sun Nov 15 13:00:57 2020
*filter
:INPUT ACCEPT (151:14142)
:FORWARD DROP (15:780)
:OUTPUT ACCEPT (123:16348)
:DOCKER - (0:0)
:DOCKER-ISOLATION-STAGE-1 - (0:0)
:DOCKER-ISOLATION-STAGE-2 - (0:0)
:DOCKER-USER - (0:0)
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o br-49042740d2e8 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-49042740d2e8 -j DOCKER
-A FORWARD -i br-49042740d2e8 ! -o br-49042740d2e8 -j ACCEPT
-A FORWARD -i br-49042740d2e8 -o br-49042740d2e8 -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-49042740d2e8 ! -o br-49042740d2e8 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o br-49042740d2e8 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Sun Nov 15 13:00:57 2020
# Generated by iptables-save v1.8.4 on Sun Nov 15 13:00:57 2020
*nat
:PREROUTING ACCEPT (20:1254)
:INPUT ACCEPT (20:1254)
:OUTPUT ACCEPT (0:0)
:POSTROUTING ACCEPT (0:0)
:DOCKER - (0:0)
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.19.0.0/16 ! -o br-49042740d2e8 -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i br-49042740d2e8 -j RETURN
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80
COMMIT
# Completed on Sun Nov 15 13:00:57 2020

From my local machine, I am unable to connect to the server; the ports are shown as filtered:

$ nmap example.de -Pn
Starting Nmap 7.80 ( https://nmap.org ) at 2020-11-15 14:12 CET
Nmap scan report for example.de (81.169.xxx.xxx)
Host is up (0.037s latency).
rDNS record for 81.169.xxx.xxx: h290xxxx.stratoserver.net
Not shown: 994 closed ports
PORT     STATE    SERVICE
22/tcp   open     ssh
80/tcp   filtered http
135/tcp  filtered msrpc
139/tcp  filtered netbios-ssn
445/tcp  filtered microsoft-ds
9876/tcp filtered sd

Nmap done: 1 IP address (1 host up) scanned in 2.67 seconds

Running the container in host network mode works as expected, and I can access the nginx default page both via localhost and from my local machine:
docker run --rm --network host nginx

Why is exposing the ports not working as expected?
How can I fix this or analyze the problem further?
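
A hedged way to narrow this down (interface name taken from the ip a output above): check whether the SYNs reach the host at all, and whether Docker’s DNAT/FORWARD rules ever match.

# If no SYN shows up here while scanning from outside, something upstream
# (e.g. a hoster-side firewall) is filtering before packets reach the host
sudo tcpdump -ni eno1 'tcp port 80'

# Packet/byte counters show whether the DNAT rule and FORWARD chain
# are actually being hit
sudo iptables -t nat -L DOCKER -v -n
sudo iptables -L FORWARD -v -n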

Adding multiple outgoing IP addresses in Squid with different ports

I have a Squid proxy server running on CentOS 8; the server itself was supplied with an additional /27 subnet. The problem I’m having at the… | Read the rest of https://www.webhostingtalk.com/showthread.php?t=1827319&goto=newpost