linux – CentOS 7 iptables not persistent after reboot

I have researched and found a lot of consistent information on how to proceed.

However, it still does not work.

I've put together a couple of iptables rules.
I save them with: sudo service iptables save

I check /etc/sysconfig/iptables and the rules are recorded there.

I then restart the system and when it comes back, the new rules are not present.

Looking at /etc/sysconfig/iptables, they are still there.

If I then run sudo systemctl restart iptables, the rules appear again.

Every time I restart, I have to rerun the restart command.
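For reference, a common thing to check in this situation (an assumption about the setup, not something stated above) is whether the iptables unit from the iptables-services package is enabled at boot at all; the rules saved in /etc/sysconfig/iptables are only loaded at startup if it is:

```shell
# Hedged sketch: on CentOS 7 with iptables-services, rules saved to
# /etc/sysconfig/iptables are applied at boot only when the unit is enabled.
systemctl is-enabled iptables   # prints "disabled" if boot-time loading is off
sudo systemctl enable iptables  # make the saved rules load on every boot
```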

Pointers?

Thank you!

Amazon Web Services – AWS iptables NAT works on eth0 but not on eth1

I have a very simple configuration: two t2.micro instances, one with just eth0 and the other with eth0 and eth1, both in the same VPC, in a 10.0.0.0/24 subnet of 10.0.0.0/16.

All I am trying to do is route traffic coming from the Internet through one t2 to the other, and back.

Here is the test setup, followed by what works, then what does not. I need the second scenario to work and I cannot figure out how.

/proc/sys/net/ipv4/ip_forward = 1

t2-A: eth0 private IP 10.0.0.120, EIP a0.b0.c0.d0
      eth1 private IP 10.0.0.16, EIP a1.b1.c1.d1

t2-B: eth0 private IP 10.0.0.113

I can ping a0.b0.c0.d0; the pings arrive at 10.0.0.120, are NATed and routed to 10.0.0.113, and the ping replies come back to me from a0.b0.c0.d0.

With just these two rules:
iptables -t nat -I PREROUTING -i eth0 -p icmp -j DNAT --to-destination 10.0.0.113
iptables -t nat -I POSTROUTING -o eth0 -p icmp -j MASQUERADE

But when I try to do the same thing with eth1, I cannot make it work:

iptables -t nat -I PREROUTING -i eth1 -p icmp -j DNAT --to-destination 10.0.0.113
iptables -t nat -I POSTROUTING -o eth1 -p icmp -j MASQUERADE

Pinging a1.b1.c1.d1 does not work. I can see the pings reach 10.0.0.16, and nothing happens after that: they never show up at 10.0.0.113 or on any other interface, and clearly no ping replies are sent.

When I first encountered this problem I opened an AWS support ticket, and they suggested it was an asymmetric routing problem. They recommended the following policy-based routing:

ip route add default via 10.0.0.1 dev eth0 table 1
ip route add default via 10.0.0.1 dev eth1 table 2
ip rule add from 10.0.0.120/32 table 1 priority 500
ip rule add from 10.0.0.16/32 table 2 priority 600

I did that, but it had no effect on the problem.
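One possibility worth ruling out (my assumption, not something from the ticket): the kernel's reverse-path filter frequently drops packets arriving on a second interface in exactly this asymmetric pattern, before any NAT rule sees them. Loosening it, and logging martians, makes the drop visible or avoidable:

```shell
# Hedged sketch: relax reverse-path filtering so packets arriving on eth1
# whose replies would leave via another interface are not silently dropped.
sysctl -w net.ipv4.conf.all.rp_filter=2    # 2 = loose mode
sysctl -w net.ipv4.conf.eth1.rp_filter=2
# Log the drops instead, useful for confirming the diagnosis:
sysctl -w net.ipv4.conf.eth1.log_martians=1
```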

Do you have any ideas?

firewall – iptables --limit flag rounding

I tried adding an iptables rule:

sudo iptables -A COUNTER -m limit --limit 1024/sec --limit-burst 1024 -j ACCEPT

But when I view the rules using iptables-save -c or iptables -nvxL, the output is:

-A COUNTER -m limit --limit 1111/sec --limit-burst 1024 -j ACCEPT

I cannot understand how 1024/sec gets translated to 1111/sec.

I have tried other limits as well, and a few of them were changed too. Can someone explain what is going on?

Similarly, if I try 900/sec, it is translated to 909/sec.
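The rounding pattern above is consistent with the limit match storing rates as an integer cost against a fixed scale of 10000 (that constant is my reading of the xt_limit behavior; treat it as an assumption). Both divisions truncate, so what prints back is the nearest representable rate, not the requested one:

```shell
# Integer-division sketch of the observed rounding (scale assumed = 10000):
# stored cost = scale / requested_rate, displayed rate = scale / cost.
scale=10000
for rate in 1024 900; do
  cost=$(( scale / rate ))     # 1024 -> 9, 900 -> 11 (truncated)
  shown=$(( scale / cost ))    # 9 -> 1111, 11 -> 909
  echo "${rate}/sec is stored and shown as ${shown}/sec"
done
```

This reproduces both observations: 1024/sec becomes 1111/sec and 900/sec becomes 909/sec.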

iptables – cloud-netconfig: adding multiple addresses to eth0 interface

We have an iptables (NAT) server with SLES 12 running on Azure. Multiple IP addresses are assigned to the virtual machine's network interface, but they are not configured inside the guest operating system.
The goal is to create NAT rules to forward traffic to servers that are not reachable from an on-premises environment. Communication works, and the remote servers are reachable once the NAT rules are in place.

https://github.com/SUSE-Enceladus/cloud-netconfig

But in /var/log/messages we frequently see messages like the following:

2019-05-15T12:05:09.516913-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "Too".
2019-05-15T12:05:09.516979-05:00 mznaplapipt002 bash[63729]: <13>May 15 12:05:08 cloud-netconfig: adding address many to eth0 interface
2019-05-15T12:05:09.517042-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "many".
2019-05-15T12:05:09.517106-05:00 mznaplapipt002 bash[63729]: <13>May 15 12:05:08 cloud-netconfig: adding address requests to eth0 interface
2019-05-15T12:05:09.517216-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "requests".
2019-05-15T12:05:09.517332-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "many".
2019-05-15T12:05:09.517395-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "requests".
2019-05-15T12:05:09.517444-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "Too".
2019-05-15T12:05:09.517506-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "many".
2019-05-15T12:05:09.517569-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "requests".
2019-05-15T12:05:09.517631-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "many".
2019-05-15T12:05:09.517709-05:00 mznaplapipt002 systemd[1]: Started cloud-netconfig configuration update.
2019-05-15T12:05:09.517936-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "requests".
2019-05-15T12:05:09.518003-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "Too".
2019-05-15T12:05:09.518055-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "many".
2019-05-15T12:05:09.518146-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "requests".
2019-05-15T12:05:09.518232-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "many".
2019-05-15T12:05:09.518295-05:00 mznaplapipt002 bash[63729]: Error: inet prefix is expected rather than "requests".
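A hypothetical reconstruction of how errors like these can arise (the get_addresses function below is a stand-in, not the real cloud-netconfig code): if the metadata service answers with an error string such as "Too many requests" instead of an address list, word-splitting hands each word to ip addr add, which then complains that an inet prefix was expected:

```shell
# Stand-in for a throttled metadata reply; the real script would query the
# Azure instance metadata endpoint instead.
get_addresses() { echo "Too many requests"; }

# Each word is treated as if it were an address/prefix, matching the tokens
# ("Too", "many", "requests") seen in the log above.
for addr in $(get_addresses); do
  echo "would run: ip addr add ${addr} dev eth0"
done
```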

centos – Site-to-Site VPN Routing / Rules Firewalld / Iptables

I have a site-to-site tunnel configured successfully, but I have problems getting the SITE1 local network to reach the SITE2 local network.

Hosts on LAN2 can ping the gateway at SITE1, and hosts on LAN1 can ping (and SSH to) the GW on LAN2. However, I cannot ping or SSH to hosts behind that GW.

IPv4 forwarding is enabled.

When I traceroute from a LAN1 host to a LAN2 host, the trace dies at the LAN2 gateway. This tells me my routes are good at least as far as each GW, so what do I need to do on the LAN2 GW to allow ping and SSH through the tunnel?

The GW at LAN2 is CentOS 7, and here is the config:
eth0 = LAN, firewall zone = internal
wlan0 = WAN, firewall zone = external
tun0 = tunnel, firewall zone = tunnel

How do I allow SSH from a LAN1 host, through the GW on LAN2, to a host on LAN2?

EDIT1: On the SITE2 GW there is a route to LAN2 via eth0 and to LAN1 via tun0.
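As a hedged sketch of the kind of change usually needed on the LAN2 GW (interface names taken from the config above; whether direct rules are the right mechanism on this firewalld version is my assumption): firewalld does not forward between zones by default, so tunnel-to-LAN forwarding has to be allowed explicitly:

```shell
# Allow connections arriving from the tunnel to be forwarded to the LAN,
# and allow the corresponding replies back out through the tunnel.
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 \
    -i tun0 -o eth0 -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 \
    -i eth0 -o tun0 -m state --state RELATED,ESTABLISHED -j ACCEPT
firewall-cmd --reload
```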

iptables – ip6tables rules to allow port 80 and 443 traffic from only a few specific IP addresses

I am using Ubuntu 18.04 and have successfully configured nginx and uWSGI to host multiple websites.

I have an AAAA record mapped to an IPv6 address via my DNS provider, and my nginx configuration file listens on ports 80 and 443 for this IPv6 address.

This configuration works very well.

However, I would like to limit IPv6 traffic on ports 80 and 443 to ONLY a few specific source addresses.

When I list the current ip6tables rules using ip6tables -S, there is a line like this:

-A ufw6-user-input -p tcp -m multiport --dports 80,443 -m comment --comment "dapp_Nginx%20Full" -j ACCEPT

I'm new to iptables in general, but of all the readings and tutorials I've done, it looks like:

  1. You have to make sure the rules are in the right order.
  2. You want to save the rules to a file before making any changes, in case you break something.
  3. Once the rules are the way you want them, you use something like iptables-persistent so that they survive a reboot.

My question is: what rules do I need to achieve the goal above, in what order, and will they apply only to IPv6 traffic on ports 80/443?
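As a sketch of the rule shape (the source addresses are placeholders; and since ufw manages these chains, the durable place for such rules is ufw's own rule files rather than raw ip6tables, which is my assumption about this setup):

```shell
# Accept 80/443 only from the listed IPv6 sources, then drop everything else
# on those ports. Order matters: the ACCEPTs must come before the DROP.
ip6tables -A INPUT -p tcp -m multiport --dports 80,443 -s 2001:db8::10 -j ACCEPT
ip6tables -A INPUT -p tcp -m multiport --dports 80,443 -s 2001:db8::20 -j ACCEPT
ip6tables -A INPUT -p tcp -m multiport --dports 80,443 -j DROP
```

These rules affect only IPv6; IPv4 traffic is governed separately by iptables.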

iptables – A firewall blocking my Apache

Hello. My server is 32-bit, running CentOS 6.10 with MySQL 5.0, PHP 5.2.17 and Apache 2.2. Everything boots cleanly and Apache serves pages fine, but ONLY while iptables is stopped; as soon as I start iptables, the site goes down.

Here are the iptables rules:

[root@SERVER ~]# iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 53 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 53 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -i eth0 -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -s 90.80.70.80/32 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT


As far as I can tell there is nothing wrong, so why does the firewall block Apache when iptables starts?
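One observation that may explain the listing above (offered as a reading of the output, not a confirmed diagnosis): iptables evaluates rules top-down, and the catch-all REJECT sits above every port-80 ACCEPT, so those ACCEPTs can never match. A sketch of moving one above the REJECT (the position number 8 is an assumption about this exact listing):

```shell
# Rules below a terminating REJECT are dead and never evaluated, so insert
# an HTTP ACCEPT above the catch-all REJECT instead of appending below it.
iptables -I INPUT 8 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
service iptables save   # persist the reordered rules on CentOS 6
```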

iptables – forward one complete IP address to another IP address, all ports

Hi,
I have 2 virtual private servers and I want to use only one IP. So, for example, every port on IP 1 (0.0.0.0:port) should be redirected to IP 2 (0.0.0.1:port). How do I set this up on Ubuntu? Thank you!
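A minimal sketch, keeping the placeholder addresses from the question (0.0.0.0 and 0.0.0.1 stand in for the two real public IPs): DNAT everything arriving for IP 1 to IP 2, masquerade the forwarded traffic, and make sure forwarding is on:

```shell
# Forward all ports from this box (IP 1) to the second server (IP 2).
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING  -d 0.0.0.0 -j DNAT --to-destination 0.0.0.1
iptables -t nat -A POSTROUTING -d 0.0.0.1 -j MASQUERADE
```

Without a -p/--dport restriction, DNAT rewrites every port, which is what "all ports" asks for here.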

iptables – Properly configure fail2ban for the ssh server in the Docker container

I have a fail2ban-based setup: I run a container with an SSH service, and I see a lot of "strange" connections. I have configured Docker to send the container log to the systemd journal, and I use that as the source for fail2ban. Here is the filter I use in F2B:

[sshd_docker]
enabled = true
port = 22
filter = sshd[__prefix_line="^s*S+s+[^[]+ [w+]:[^]]+ ]: s+", journalmatch="CONTAINER_NAME=sshdocker"]
action = %(banaction)s[name=%(__name__)s, bantime="%(bantime)s", port="%(port)s", protocol="%(protocol)s", chain="DOCKER-USER"]

This filter does work, as I can see in iptables:

# iptables -t filter -L FORWARD
Chain FORWARD (policy DROP)
target     prot opt source               destination
DOCKER-USER  all  --  anywhere             anywhere
[....]

# iptables -t filter -L DOCKER-USER
Chain DOCKER-USER (1 references)
target     prot opt source               destination
f2b-sshd_docker  tcp  --  anywhere             anywhere             multiport dports ssh
[....]

# iptables -t filter -L f2b-sshd_docker
Chain f2b-sshd_docker (1 references)
target     prot opt source               destination
REJECT     all  --  96.9.168.71          anywhere             reject-with icmp-port-unreachable
REJECT     all  --  94.96.68.78          anywhere             reject-with icmp-port-unreachable
[....]

I think fail2ban sets up the chain in iptables and updates it correctly. However, I still see incoming connections in the container log even though those addresses are banned (https://pastebin.com/K3EwQMGG).

What am I doing wrong?

static routes – Multiple interfaces in iproute2 and iptables

The following is my network environment; ppp0 is my default Internet uplink:

(network diagram image not included)

My iptables:

*nat
-A PREROUTING -i ppp0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.1.11.10:80
-A PREROUTING -i ppp0 -p tcp -m tcp --dport 443 -j DNAT --to-destination 10.1.11.10:443
-A PREROUTING -d 1.2.3.4/32 -p tcp --dport 80 -j DNAT --to-destination 10.1.11.200:80
-A PREROUTING -d 1.2.3.4/32 -p tcp --dport 443 -j DNAT --to-destination 10.1.11.200:443

-A POSTROUTING -d 10.8.0.0/24 -o vpn0 -j SNAT --to-source 10.8.0.254
-A POSTROUTING -s 10.1.11.0/24 -o ppp0 -j MASQUERADE
-A POSTROUTING -s 192.168.1.0/24 -o ppp0 -j MASQUERADE

*mangle
-A OUTPUT -p tcp -d 1.2.1.2/32 --dport 22 -j MARK --set-mark 1

*filter
-A INPUT -s 10.1.11.0/24 -j ACCEPT
-A INPUT -s 192.168.1.0/24 -j ACCEPT

My routing tables:

root@abc:~# cat /etc/iproute2/rt_tables
255 local
254 main
253 default
0 unspec
150 ext.out

My iproute2 commands:

root@abc:~# ip route add default via 1.2.3.254 table ext.out
root@abc:~# ip rule add from all fwmark 1 table ext.out
root@abc:~# ip route flush cache

My question is:

  1. How do I get SSH to 1.2.1.2 to go out via the second Internet uplink?
  2. Why do web connections from lan0 to 1.2.3.4 go through the gateway automatically?
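For question 1, the pieces already shown can be tied together as one sequence (the mark value and the ext.out table come from the question; that 1.2.3.254 is the second uplink's gateway is my assumption):

```shell
# Mark outgoing SSH to 1.2.1.2, then route marked packets via the second
# uplink's table instead of the ppp0 default route.
iptables -t mangle -A OUTPUT -p tcp -d 1.2.1.2/32 --dport 22 -j MARK --set-mark 1
ip route add default via 1.2.3.254 table ext.out   # second uplink's gateway
ip rule add fwmark 1 table ext.out
ip route flush cache
```

Note that marks set in the mangle OUTPUT chain only affect locally generated traffic; forwarded LAN traffic would need the same MARK rule in PREROUTING.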