web crawlers – Error on public website when web scraping with my personal user agent

I’m brand new to coding and have finished making a simple program to web scrape some stock websites for particular data. The simplified code looks like this:

import requests
from bs4 import BeautifulSoup

# User-Agent header so the site doesn't reject the request
headers = {'User-Agent': 'Personal_User_Agent'}

ticker = "JAGX"
fv = f"https://finviz.com/quote.ashx?t={ticker}"

r_fv = requests.get(fv, headers=headers)
soup_fv = BeautifulSoup(r_fv.text, 'html.parser')
fv_ticker_title = soup_fv.find('title')  # grab the page <title> as a quick sanity check
print(fv_ticker_title)

The site I'm scraping would not return data until I added a User-Agent header, but after that it worked fine. I then built a small website and ran it on Python's local development server, which also worked fine, so I thought I was ready to make it public via PythonAnywhere.
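A stripped-down version of how the scraping is called from the site looks roughly like this; I'm showing it with Flask purely as an illustration, and the route and function names are made up, not the real ones:

from flask import Flask
import requests
from bs4 import BeautifulSoup

app = Flask(__name__)

@app.route("/quote/<ticker>")
def quote(ticker):
    # same scraping logic as above, called from a web request
    headers = {'User-Agent': 'Personal_User_Agent'}
    fv = f"https://finviz.com/quote.ashx?t={ticker}"
    r_fv = requests.get(fv, headers=headers)
    soup_fv = BeautifulSoup(r_fv.text, 'html.parser')
    title = soup_fv.find('title')
    return title.text if title else "no title found"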

However, once I made the website public, the program shuts down every time it tries to fetch data via web scraping (i.e. whenever the request with the user agent is made). I don't like the idea of using my personal user agent on a public domain anyway, but I couldn't find out how other people who web scrape handle this when a user agent is required. Any advice!?
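To show what I mean by "shuts down": the failure happens at the request itself, and this is the kind of diagnostic wrapper I'd add to see the actual error instead of the app dying (just a sketch using requests' standard raise_for_status and exception handling; the timeout value is arbitrary):

import requests

headers = {'User-Agent': 'Personal_User_Agent'}
fv = "https://finviz.com/quote.ashx?t=JAGX"

try:
    r_fv = requests.get(fv, headers=headers, timeout=10)
    r_fv.raise_for_status()          # raises HTTPError on 4xx/5xx responses
except requests.RequestException as e:
    print(f"Request failed: {e}")    # surface the error instead of crashing
else:
    print(r_fv.status_code)          # 200 when the site accepts the request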