createAsyncThunk not rejecting

I use createAsyncThunk for creating an account in Firebase, as below. But if any error occurs, e.g. a 400 because the email is invalid or the userKey is wrong, the error is not caught and the action is fulfilled rather than rejected.

export const signup = createAsyncThunk(
  actionTypes.SIGNUP,
  async (form, { rejectWithValue }) => {
    const { username, email, password } = form;
    try {
      const payload = {
        email,
        password,
        returnSecureToken: true,
      };
      const url = `it's working :<`;
      const res = await fetch(url, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify(payload),
      });
      const data = await res.json();
    
      return {
        email: data.email,
        userId: data.localId,
        isAuth: true,
      };
    } catch (err) {
      rejectWithValue(err);
    }
  }
);
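As context for why the catch branch still yields a fulfilled action: an async function that calls a helper without returning its result resolves normally. A minimal sketch, no Redux required; the stand-in rejectWithValue below is hypothetical, purely for illustration:

```javascript
// Stand-in for createAsyncThunk's rejectWithValue (illustrative only):
// it just wraps a value in a "rejected" marker object.
const rejectWithValue = (value) => ({ rejected: true, payload: String(value) });

// Calling the helper without `return` discards its result, so the
// async function resolves with undefined (i.e. "fulfilled").
async function withoutReturn() {
  try {
    throw new Error('HTTP 400');
  } catch (err) {
    rejectWithValue(err); // result discarded
  }
}

// Returning the helper's result propagates the rejection marker.
async function withReturn() {
  try {
    throw new Error('HTTP 400');
  } catch (err) {
    return rejectWithValue(err);
  }
}
```

Note also that fetch does not throw on HTTP error statuses such as 400; the try/catch only fires on network failures, so an explicit `res.ok` check is needed before the rejection path can ever be reached.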

Thanks for help.

blockchain – Miners forego revenue in order to 51% attack the Bitcoin protocol by rejecting blocks

I was recently listening to Mike Green and Anthony Pompliano debate the future of Bitcoin on RealVision. And Mike raised an interesting point regarding Bitcoin (I’m paraphrasing). “Scenario: Let’s say the majority of miners, which are no longer bound by financial incentives, decides to attack the Bitcoin network by continuously rejecting blocks, could they do it? Could you essentially reject-spam Bitcoin’s network?”

I found that an interesting question, and I haven't been able to find an answer. Can anyone shed light on the plausibility of this scenario?

postfix – Postgrey: Put greylisted emails in a directory instead of rejecting them entirely

I'm using docker-mailserver to manage my incoming and outgoing email. One annoying thing is that some emails are greylisted (e.g. from Amazon or DigitalOcean). I've already whitelisted those domains, but I was wondering whether there is an option to put incoming greylisted emails into a dedicated directory in my inbox.

So instead of rejecting some emails completely, it would put them into a "postgrey" directory (while still responding to the sender that the message was rejected, as usual).

I’ve looked into the documentation but was unable to find such an option.

7 – OG Mailinglist rejecting basic email sent to Organic Group

I am using OG Mailinglist 7.x-1.1-alpha2.
We have been using this module for years now.
We have a new user who is attempting to send emails to a couple of our lists, which are the email part of Organic Groups that the OG Mailinglist module turns into listservs.
This new user is sending basic emails with no inline images or attachments, and every email is rejected with a response saying the emails could not be read properly.

Usually when this happens, it is because of a bug in the library used to decode the emails: the library malfunctions when an email has both an attachment and an inline image in the body. But the person having the issue is sending plain-text emails only.

We would appreciate any insight.

Thank you!

HAProxy rejecting connections with low resource usage

I'm having problems with my HAProxy servers refusing new connections (or timing them out) after a certain threshold. The proxy servers are AWS c5.large EC2 instances with 2 processors and 4 GB of RAM. The same configuration is used for the two types of connections on our site: one for WebSocket connections, which generally have 2K-4K simultaneous connections and a request rate of roughly 10/s, and the other for normal web traffic with nginx as the backend, with roughly 400-500 simultaneous connections and a request rate of roughly 100-150/s. Typical CPU usage for both is 3-5% on the haproxy process, with 2-3% of memory used by the WebSocket proxy (40-60 MB) and 1-3% by the web proxy (30-40 MB).

Per the attached configuration, HAProxy is mapped to the two processors, with one process and two threads running. Both types of traffic are typically 95% (or more) SSL. I watched the proxy counters with watch -n 1 'echo "show info" | socat unix:/run/haproxy/admin.sock -' to see whether I'm hitting my limits, which doesn't appear to be the case.

The problems start during periods of heavy traffic, when our simultaneous WebSocket connections reach around 5K and the web request rate reaches 400 requests/s. I mention both servers here because I know the configuration can handle high concurrent connections and request rates, but I'm missing some other resource limit. Under normal conditions everything works fine; the problems we see are ERR_CONNECTION_TIMED_OUT errors (from Chrome). I never see 502 errors. I don't see any other process using more CPU or memory on the server. I'm also attaching some other possibly relevant configuration, such as my limits and sysctl parameters.

Any ideas on what I might be missing? Am I reading top and ps aux | grep haproxy wrong and misinterpreting CPU/memory usage? Am I missing a TCP connection limit? The backend servers (nginx/websocket) work but never seem to be taxed; we load-tested them with far more connections and traffic, and we hit limits at the proxy well before the backend servers.
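One quick sanity check worth doing before digging into TCP tuning: confirm the file-descriptor limits the process actually runs with, since limits.conf and ulimit-n do not always end up as the effective values. A hedged sketch; it inspects the current shell's PID only so the snippet is self-contained, and on the proxy host you would substitute HAProxy's PID instead:

```shell
# Print the effective open-file limits for a process from /proc.
# $$ (this shell) is used here for illustration; replace it with
# HAProxy's PID (e.g. from pidof haproxy) on the real server.
pid=$$
awk '/Max open files/ {print "soft:", $4, "hard:", $5}' "/proc/$pid/limits"
```

If the soft limit printed here is far below the configured ulimit-n, the process is not running with the limits you think it is.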

Thank you so much.

haproxy.cfg:

global
    ulimit-n 300057
    quiet
    maxconn 150000
    maxconnrate 1000
    nbproc 1
    nbthread 2
    cpu-map auto:1/1-2 0-1

    daemon
    stats socket /run/haproxy/admin.sock mode 600 level admin
    stats timeout 2m
    log 127.0.0.1:514 local0
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private
    ssl-default-bind-options no-sslv3 no-tlsv10
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL:!RC4

defaults
    maxconn 150000
    mode http
    log global
    option forwardfor
    timeout client 30s
    timeout server 120s
    timeout connect 10s
    timeout queue 60s
    timeout http-request 20s

frontend default_proxy
    option httplog
    bind :80
    bind :443 ssl crt /etc/haproxy/ssl.pem
    ... acl stuff which may route to a different backend
    ... acl for websocket traffic
    use_backend websocket if websocket_acl
    default_backend default_web

backend default_web
    log global
    option httpclose
    option http-server-close
    option checkcache
    balance roundrobin
    option httpchk HEAD /index.php HTTP/1.1\r\nHost:\ website.com
    server web1 192.168.1.2:80 check inter 6000 weight 1
    server web2 192.168.1.3:80 check inter 6000 weight 1

backend websocket
    #   no option checkcache
    option httpclose
    option http-server-close
    balance roundrobin
    server websocket-1 192.168.1.4:80 check inter 6000 weight 1
    server websocket-2 192.168.1.5:80 check inter 6000 weight 1

Release haproxy -vv:

HA-Proxy version 1.8.23-1ppa1~xenial 2019/11/26
Copyright 2000-2019 Willy Tarreau 

Build options :
  TARGET  = linux2628
  CPU     = generic
  CC      = gcc
  CFLAGS  = -O2 -g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label
OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_SYSTEMD=1 USE_PCRE2=1 USE_PCRE2_JIT=1 USE_NS=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g  1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.1
Built with transparent proxy support using: IP_TRANSPARENT         IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE2 version : 10.21 2016-01-12
PCRE2 library supports JIT : yes
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
      epoll : pref=300,  test result OK
       poll : pref=200,  test result OK
     select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
    (SPOE) spoe
    (COMP) compression
    (TRACE) trace

limits.conf:

* soft nofile 120000
* soft nproc 120000

sysctl.conf:

net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies=1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.ip_local_port_range = 1024 65023
net.ipv4.tcp_max_syn_backlog = 50000
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_max_orphans = 60000
net.ipv4.tcp_synack_retries = 3
net.core.somaxconn = 50000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.core.netdev_max_backlog = 50000
fs.epoll.max_user_instances = 10000

Typical ps aux | grep haproxy output under load, with 330 simultaneous connections and 80 requests/s:

root      8122  4.5  1.2 159052 46200 ?        Ssl  Jan28  40:56 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 29790
root     12893  0.0  0.3  49720 12832 ?        Ss   Jan21   0:00 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf 29790

Prove the undecidability of a TM accepting at least one input AND rejecting at least one input

I need to prove by reduction that this property of Turing machines is undecidable. Any thoughts?
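For what it's worth, here is one possible reduction sketch (unverified, offered only as a starting point) from the acceptance problem $A_{TM}$, which is known to be undecidable:

```latex
Let $L = \{\langle M \rangle \mid M \text{ accepts at least one input and rejects at least one input}\}$.

Given an instance $\langle M, w \rangle$ of $A_{TM}$, build a machine $M'$ that on input $x$:
\begin{itemize}
  \item rejects immediately if $x = \varepsilon$;
  \item otherwise simulates $M$ on $w$ and accepts iff $M$ accepts.
\end{itemize}

$M'$ always rejects at least one input ($\varepsilon$), and it accepts at least
one input iff $M$ accepts $w$. Therefore
$\langle M' \rangle \in L \iff \langle M, w \rangle \in A_{TM}$,
so a decider for $L$ would yield a decider for $A_{TM}$.
```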

status – Correct display for rejecting the request

I am currently working on a project where the user submits a media request, which will then be reviewed by another user (verifier).

I've built the information display, but I'm not sure how to display the "rejection reason". To me this seems important, so I thought of putting it at the top of the page, but the rejection reason uses a WYSIWYG editor, which means photos/attachments or arbitrary text formatting are possible.

Also, I wondered: does it make sense to show more details of the request, or should I just show the "Reason for Rejection"?

Please note that once a request is rejected, we do not allow it to be published; the user must create the request again. (If our process is wrong, let me know.)

This is the original wireframe of the idea; think of it as something like YouTube.

http – How strict should I be in rejecting unexpected query parameters?

TL;DR: Is it recommended to return an HTTP 400 Bad Request if additional, unexpected parameters are sent with a request?

I am building a web application and testing it with OWASP ZAP. I am quite happy with the results: the alerts I receive all seem to be low confidence, and when I inspect them in detail I find that the ZAP tool has not really changed anything about the request's outcome. But this leads me to a "higher-level" question, which is best explained by an example:

OWASP ZAP supposedly found an SQL injection with the following URL:

http://example.com/api/client/1?query=%27+AND+%271%27%3D%271%27+--+

in "human-readable" form, namely:

http://example.com/api/client/1?query=query & # 39; AND & # 39; 1 & # 39; = & # 39; 1 & # 39; -

This looks like a fairly standard SQL injection attack. Now, this endpoint is meant to return a customer object in JSON form, and when this request is made via OWASP ZAP, the server returns exactly what is expected: the client with ID = 1. I do everything the OWASP ZAP documentation recommends with respect to SQL injection. What more should I do?

It seems to me that I just don't know what OWASP ZAP expects to receive in response to this type of attack; an error response, perhaps? The documentation is helpful in a "here's how to interface with the database" way, but I don't know whether I should respond with an error.

Should I return a 400 Bad Request error if unsuitable query parameters are provided?
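For what it's worth, a strict allow-list check is straightforward to implement. Here is a minimal framework-agnostic sketch; the parameter names and the validateQuery helper are hypothetical, purely for illustration:

```javascript
// Hypothetical allow-list for a given endpoint (illustrative names only).
const ALLOWED_PARAMS = new Set(['page', 'limit', 'sort']);

// Returns a 400-style result when any unexpected parameter is present,
// otherwise signals that the request may proceed.
function validateQuery(query) {
  const unexpected = Object.keys(query).filter((k) => !ALLOWED_PARAMS.has(k));
  if (unexpected.length > 0) {
    return {
      status: 400,
      error: `Unexpected query parameter(s): ${unexpected.join(', ')}`,
    };
  }
  return { status: 200 };
}
```

With a check like this, a request carrying parameters outside the allow-list gets a 400 up front; parameters that are expected still need their values validated and parameterized separately before they reach the database layer.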

ubuntu – iRedmail server rejecting mail sent from another internal server

We have an iRedMail server (Ubuntu 18.04), a Node server (Raspbian Stretch) acting as a mail sender (Nodemailer) for our main application, and another Node instance (also Stretch) that handles passwords for the email accounts. (We set all of this up ourselves; the mail server is hosted at the office.) Mail works in general (I can send/receive to/from Gmail, Yahoo, etc.), except for mail sent to recipients on our new domain.

Error: recipient command failed: 554 5.7.1: Client host rejected: Access denied

I tried commenting out
-o smtpd_relay_restrictions=permit_sasl_authenticated,reject
in /etc/postfix/master.cf, without success.

nmap on the local IP:

PORT    STATE SERVICE
22/tcp  open  ssh
25/tcp  open  smtp
80/tcp  open  http
110/tcp open  pop3
143/tcp open  imap
443/tcp open  https
587/tcp open  submission
993/tcp open  imaps
995/tcp open  pop3s

The Nodemailer server sends mail to everyone else fine.

What should I try next?