How to make a plugin that allows you to create a widget with a backend that can send post requests?


beginner – Content aggregator using bs4 and requests

Standard code review: tell me what's good, what's bad, and how to improve. Critical suggestions welcome. This is a content aggregator using bs4 and requests. I did not use any tutorials or outside help.

import requests
from bs4 import BeautifulSoup as bs

topic = input('Enter the topic: ').lower()
def getdata(url, headers):
    r = requests.get(url, headers=headers)
    return r.text

def linesplit():
    print('-~'*16, 'COLIN IS MOOCH', '-~'*16, '\n')
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}
google = getdata(f'https://www.google.com/search?q={topic}&oq={topic}&aqs=chrome..69i59j69i57j69i59j69i60l3j69i65j69i60.2666j0j7&sourceid=chrome&ie=UTF-8', headers)
soup = bs(google, 'html.parser')
links = str(soup.find_all('div', class_='TbwUpd NJjxre'))
links = links.replace('<div class="TbwUpd NJjxre"><cite class="iUh30 Zu0yb qLRx3b tjvcx">', '')
links = links.replace('<span class="dyjrff qzEoUe">', '')
links = links.replace('</span></cite>< /div>', '')
links = links.replace('<div class="TbwUpd NJjxre"><cite class="iUh30 Zu0yb tjvcx">', '')
links = links.replace('</cite></div>', '')
links = links.replace('</span>', '')
links = links.replace(' › ', "")
links = links.split(', ')
links[-1] = links[-1].replace(')', '')
links[0] = links[0].replace('(', '')
info = []
counter = 0

for x in range(len(links)):
    try:
        htmldata = getdata(links[x], headers)
        newsoup = bs(htmldata, 'html.parser')
        website = ''
        for i in newsoup.find_all('p'):
            website = website + ' ' + i.text
        info.append(links[x])
        info.append(website)
        counter += 1
    except Exception:
        pass

try:
    for x in range(0, (counter * 2) + 2, 2):
        if info[x+1] != '':
            print('From ', info[x], ':')
            print(info[x+1])
            linesplit()
except IndexError:
    pass
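Since the question asks how to improve: a cleaner structure for the same fetch-and-scrape loop might look like the sketch below. The function and variable names are my own, not from the original code; the dependencies (requests, beautifulsoup4) are the same.

```python
import requests
from bs4 import BeautifulSoup

HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
}

def get_html(url, headers=HEADERS, timeout=10):
    """Fetch a page; return '' on any network error instead of crashing."""
    try:
        response = requests.get(url, headers=headers, timeout=timeout)
        response.raise_for_status()
        return response.text
    except requests.RequestException:
        return ""

def extract_paragraphs(html):
    """Join the text of every <p> tag on a page."""
    soup = BeautifulSoup(html, "html.parser")
    return " ".join(p.get_text(strip=True) for p in soup.find_all("p"))

def aggregate(urls):
    """Return a list of (url, text) pairs, skipping pages with no text."""
    results = []
    for url in urls:
        text = extract_paragraphs(get_html(url))
        if text:
            results.append((url, text))
    return results
```

Keeping each URL paired with its text in a tuple avoids the parallel even/odd indexing into `info`, and the narrow `except requests.RequestException` avoids swallowing unrelated bugs the way a bare `except Exception` does.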

linux networking – How can I set up a layer 3 bridge using Proxy ARP such that http requests can be made to the inside/proxied host’s IP successfully?

Currently I am using a Raspberry Pi to bridge an ethernet connected printer to wireless internet and have used DNAT successfully to give the printer internet access, manually forwarding the printer’s port 80 to the Rpi’s wlan0 interface port 80 along with other needed ports to access the printer using outside hosts. I’ve also been able to use Proxy ARP so that the printer’s static IP address is visible on the network, the Pi responding to ARP broadcasts on the printer’s behalf and proxying ARP requests for the printer. What I would like to do is combine the functionality of the DNAT approach with the IP separation provided by Proxy ARP.

The problem is that I cannot figure out how to seamlessly accomplish the needed forwarding/spoofing with the Rpi so that, instead of directing requests to the Pi's port 80, outside hosts can make requests using the printer's IP directly, even if it's on a different subnet, say, to access its http page.

Is it possible to accomplish this routing in tandem with Proxy ARP? Are there other approaches that are better suited for this arrangement, or could IP aliases alongside DNAT accomplish this illusion that the printer’s IP and active ports are also present on the network/another network?
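For reference, the classic proxy-ARP routing setup looks something like the following sketch. The interface names (`wlan0` facing the LAN, `eth0` facing the printer) and the printer address `192.168.0.50` are assumptions; substitute your own. With this in place, DNAT is generally unnecessary: LAN hosts ARP for the printer's IP, the Pi answers on its behalf, and the kernel routes the packets out `eth0`, so `http://192.168.0.50/` reaches the printer directly.

```shell
# Enable routing and proxy ARP on both interfaces
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv4.conf.wlan0.proxy_arp=1
sysctl -w net.ipv4.conf.eth0.proxy_arp=1

# Host route: traffic for the printer's IP goes out the wired interface
ip route add 192.168.0.50/32 dev eth0
```

Return traffic works as long as the printer's default gateway is the Pi (or its netmask makes it ARP for LAN hosts, which the Pi then proxies in the other direction).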

waf – Can a firewall appliance block http requests?

Yes. A firewall appliance (or a firewall application) will be able to distinguish between HTTP and HTTPS requests and, in the case of HTTP requests, will be able to view all the data being transmitted (not just the domain and IP). It can then block or modify any data going through.

You can also do this with HTTPS requests if you have a firewall that supports TLS interception, but then you'll need to install the firewall appliance's root certificate into your browser so that the browser trusts the intercepted connections.

malware – Sending meaningless address requests to a series of malicious IPs

I was testing some things in browsers and noticed that when I browse to meaningless addresses like abc/ signortest/ word/,
the requests are sent to IP addresses hosted on Linode, some of which have been reported as malicious.
I receive two different answers (pictures attached):

  1. 4001
  2. 503 Service Unavailable

How can I understand what's going on?


linux – one of two GET requests can’t reach web hook

There is an external system that sends two different GET requests, one after the other, to my webhook. One of the requests reaches the webhook and the other does not, and I can't understand why. I'm stuck, because I have no idea how to even start investigating it.

The environment is an Ubuntu (Linux) system, and the webhook is written in Ruby on Rails. If I make the GET request from a browser, it reaches the webhook (I can see it in the logs). The request only fails when it is executed from the external system. To explain the problem visually I've drawn a simple diagram. It seems as if something on the webserver where the webhook is hosted limits access by request headers or something similar. I have access to the Linux server where the webhook is hosted and can verify anything if needed. Any help is much appreciated.
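A starting point for investigating this kind of asymmetry is to see whether the failing request reaches the machine at all, and then to replay it by hand. The host name, path, and User-Agent below are placeholders, not from the question:

```shell
# 1. Capture what actually arrives on the wire:
sudo tcpdump -i any -nn -A 'tcp port 80 or tcp port 443'

# 2. Replay the failing request with the external system's exact headers
#    (copy them from whatever documentation or logs the system provides):
curl -v -H 'User-Agent: ExternalSystem/1.0' \
     'https://example.com/webhook?param=value'

# 3. Watch the webserver logs alongside the Rails logs:
tail -f /var/log/nginx/access.log /var/log/nginx/error.log
```

If tcpdump shows the request arriving but it never appears in the Rails log, something in front of the app (webserver config, a firewall, a CDN) is dropping it; if it never arrives, the problem is on the sender's side or in DNS/routing.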


Does a single REST API call on SharePoint actually create 29 requests?

This API call


when viewed through Chrome DevTools apparently creates 29 requests … or am I misunderstanding something? Why would it not be 1 request?

user agent – Requests from a specific older version of Firefox distributed across many Google and Cloudflare IP addresses

I’ve been getting thousands of requests each day from a specific user agent, Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Firefox/38.0. They’re all from different IP addresses (the ones I’ve looked up have been Google LLC and Cloudflare), and they all use that exact same user agent.
So far I've seen 90 unique IP addresses in my nginx logs.

I've started giving 403 statuses back to them, since it seems like something is wrong here, but I can't trace them back to any one specific point. They all seem to be legitimate requests, but the number of servers they're coming from is suspicious. Am I missing something here?
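If you decide the traffic is unwanted, rejecting it at the webserver rather than in the application saves resources. A hypothetical nginx snippet for the `server` block, matching this exact user agent string, might look like:

```nginx
# Refuse requests from this exact UA before they reach the app.
# Note: any real user still on Firefox 38 (essentially none) is also blocked.
if ($http_user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:38.0) Gecko/20100101 Firefox/38.0") {
    return 403;
}
```

A user agent that old appearing from Google and Cloudflare address space is typical of automated fetchers running in cloud infrastructure rather than real browsers, which supports treating it as bot traffic.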

ubuntu – Proxy probe requests filling up my log files and then my disk?

I have an Ubuntu/Apache 2 web server I set up recently. I quickly ran into a problem with proxy abuse, which I documented here:

I disallowed use of proxy capability on Ubuntu Apache 2 web server. How can I check it?

I set the proxy to only allow local traffic to access it and that solved the problem with my web server grinding to a halt due to all the unwanted traffic. Now my access logs are full of 403 errors. Unfortunately the instance is running on a small disk and today I discovered all the log files literally filled up the disk, leaving Apache 2 without resources and causing it to fail.

I was hoping that the proxy bots would go away after a while, but it's been over a day now and they are still hammering my system, 403 returns or not:

(my domain).com:(ip address) - - [25/Apr/2021:23:03:35 +0000] "GET http://(some proxy request URL)/?p=6&Oct0n=Oct0n HTTP/1.1" 403 493 "http://(some proxy request URL)" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"

What should I do about this?
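The disk-filling part, at least, is addressable with log rotation. Ubuntu already ships a logrotate policy at /etc/logrotate.d/apache2; a tightened version for a small disk could look like the sketch below (the size and retention values are illustrative, not recommendations):

```conf
/var/log/apache2/*.log {
    daily
    rotate 3
    maxsize 50M
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        systemctl reload apache2 > /dev/null 2>&1 || true
    endscript
}
```

For the traffic itself, a tool like fail2ban can ban IPs that accumulate 403s, which cuts both the load and the log volume.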

data – Dynamically Create Dashboards with Post Requests

I wrote a simple web app that lets players play a game and logs the players' actions, i.e. how many points they received in each round. Now I would like to take that data, aggregate it into some useful metrics, e.g. winning/losing streaks or total points per player, and give players the option to see dashboards displaying this information nicely.

The userflow should be something like this: if a player presses a button, e.g. See stats, they should be shown a URL which, when clicked, takes them to a dashboard displaying the game statistics.

So what I need is a solution that dynamically creates/destroys these dashboards when players request the stats, but that can also be kept open during games if players want to monitor the game while they play.

How could this be accomplished? I thought of pre-aggregating the data and then sending it via POST request. Is there an existing service that caters to this use case? I imagine building this myself would be a bit tedious.
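The pre-aggregation step the question mentions is straightforward regardless of which dashboard tool receives the result. A sketch, assuming the game log is a list of (player, round, points) records and taking "win" to mean scoring more than zero in a round (both assumptions are mine, not from the question):

```python
from collections import defaultdict

def aggregate_stats(log):
    """Compute total points and longest winning streak per player.

    A 'win' is assumed to mean scoring > 0 points in a round.
    """
    totals = defaultdict(int)
    best = defaultdict(int)      # longest streak seen so far
    current = defaultdict(int)   # running streak per player
    for player, _round, points in log:
        totals[player] += points
        if points > 0:
            current[player] += 1
            best[player] = max(best[player], current[player])
        else:
            current[player] = 0
    return {p: {"total": totals[p], "best_streak": best[p]} for p in totals}

log = [("alice", 1, 5), ("alice", 2, 3), ("alice", 3, 0), ("alice", 4, 2),
       ("bob", 1, 0), ("bob", 2, 7)]
stats = aggregate_stats(log)
```

The resulting dict can then be POSTed as JSON to whatever renders the dashboard; a tool such as Grafana, or a small per-player page served by the game app itself, would fit the create-on-demand flow described above.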