I'm trying to scrape LinkedIn profiles for certain job titles. Each time I run the harvest, it stops at 300 results. The search I'm trying to scrape shows that Google has found about 2 million results, but I can only ever get 300 results in total.
Here are the details:
Search query I'm scraping: site:linkedin.com (inurl:in | inurl:pub) -intitle:directory -inurl:salaries -inurl:dir -inurl:jobs "Director of Information Technology" Note: the forum software converts colon-letter sequences (e.g. ":p") into emoji, hence the spaces after the colons.
Proxies: using proxies collected via the SB proxy manager. At the moment, 168 proxies have passed the Google proxy test.
I've tried both the detailed harvester and the custom harvester, with the same result. I've also reset the custom harvester's settings to see if anything would change.
Anyone have any suggestions?
What would the syntax look like to subtract the Time_Entry column from the Time_Exit column and store the result of each row in a Duration column? I'm using SQL Server 2012.
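One way to sketch this in T-SQL (table and column names here are assumptions based on the question; adjust to your schema). DATEDIFF counts the boundaries crossed in the chosen unit, so pick MINUTE, SECOND, etc. to match the precision you need:

```sql
-- Hypothetical table dbo.Attendance with datetime columns
-- Time_Entry and Time_Exit, and an int column Duration.

-- Option 1: fill an existing Duration column per row
UPDATE dbo.Attendance
SET Duration = DATEDIFF(MINUTE, Time_Entry, Time_Exit);

-- Option 2: define Duration as a computed column so it always
-- stays in sync with the two time columns
ALTER TABLE dbo.Attendance
ADD Duration AS DATEDIFF(MINUTE, Time_Entry, Time_Exit);
```

The computed-column approach avoids the need to re-run an UPDATE whenever the entry/exit times change; both forms work on SQL Server 2012.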