Which web hosting can handle 1,000+ clicks per day?

Hello,

I’m working on a site for which I’ll start a marketing campaign two weeks from now, so I have some concerns about my current hosting, EasyWP (Namecheap); it seems to be low on performance, with a lot of limitations for its users.

For that reason, I’m considering migrating to another web host. I want my website to be ready for moderate traffic (1,000+ clicks per day).

For those of you who are experienced and confident on this matter, I’d like to know your expert opinion: I’m considering HostGator shared hosting; will it be enough?

I’d love to know your suggestions on this matter. Thank you in advance for your valuable insights.


javascript – jsPDF + html2canvas | The generated PDF is out of proportion, slightly squashed vertically compared to the web page

It generates the PDF file without any problem, but the output is slightly out of proportion (slightly squashed vertically). How can this be corrected so the proportions are exact, or at least more faithful to the page? Thanks in advance.

Javascript:

<script type="text/javascript">

const { jsPDF } = window.jspdf;
  function genPDF() {
        html2canvas(document.getElementById("testDiv"),{
            onrendered: (canvas)=>{
                var pdf = canvas.toDataURL("image/png");
                
                var doc = new jsPDF("p", "mm", "a4");

            
var width = doc.internal.pageSize.getWidth();
var height = doc.internal.pageSize.getHeight();
                
doc.addImage(pdf, 'JPEG', 0, 0, width, height);
                doc.save('test.pdf');
            }
        });
    }
</script>

node.js – How to implement an HTTP proxy in C# (.NET Core) to record Socket.IO traffic from a website running in Chrome

I need to record Socket.IO data from a website running in a Chrome browser, because I need to do further analysis on the data. I think some kind of web proxy should be used.

Ideally, I would like a solution I can code in a .NET Core (.NET 5) C# application.

I know there is something called Titanium-Web-Proxy, and I have heard FiddlerCore mentioned. I am not sure whether either of these is suitable for recording Socket.IO from a C# program.

A solution based on Node.js could also be fine, or even a solution using a plugin in Chrome or Firefox would be OK, but I would like to automate as much as possible.

So basically, any advice on how to record Socket.IO data, preferably in an automated, programmatic way, would be much appreciated.
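For reference, here is a rough, untested sketch of how Titanium-Web-Proxy (mentioned above) is typically wired up in a .NET 5 console app; the member names follow the library’s documented API and may differ between versions, so treat it as a starting point rather than a drop-in solution. Chrome would be launched with --proxy-server=127.0.0.1:8000 pointing at it. Also note that Socket.IO usually upgrades to WebSocket, which an HTTP proxy tunnels rather than decodes, so you may need to force the polling transport or capture the WebSocket frames separately.

using System;
using System.Net;
using System.Threading.Tasks;
using Titanium.Web.Proxy;
using Titanium.Web.Proxy.EventArguments;
using Titanium.Web.Proxy.Models;

class ProxyRecorder
{
    static void Main()
    {
        var proxy = new ProxyServer();
        proxy.BeforeRequest += OnRequest;

        // Listen on 127.0.0.1:8000 and decrypt HTTPS so the Socket.IO handshake is visible.
        var endPoint = new ExplicitProxyEndPoint(IPAddress.Loopback, 8000, true);
        proxy.AddEndPoint(endPoint);
        proxy.Start();

        Console.WriteLine("Proxy running on 127.0.0.1:8000 - press Enter to stop.");
        Console.ReadLine();
        proxy.Stop();
    }

    static Task OnRequest(object sender, SessionEventArgs e)
    {
        var request = e.HttpClient.Request;

        // Socket.IO polling traffic shows up as ordinary HTTP requests to /socket.io/.
        if (request.Url.Contains("/socket.io/"))
        {
            Console.WriteLine($"{request.Method} {request.Url}");
        }

        return Task.CompletedTask;
    }
}

FiddlerCore exposes a similar request hook; with either library, decrypting HTTPS means Chrome has to trust the proxy’s root certificate.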

asp.net web api – .NET Core Dependency Injection – Worker Service vs. Web API

Folks,

I have a big question about how native .NET dependency injection works in a Worker Service project.

We need to resolve a scoped service (IMyRepository) in the constructor of a MediatR handler, but we get an error at runtime; please see below:

(Shared.csproj)

Repository Folder

using System;

namespace Shared.Repository
{
    public interface IMyRepository
    {
        void Add();   
    }

    public class MyRepository : IMyRepository
    {
        public void Add()
        {
            throw new NotImplementedException();
        }
    }
}

MediatR Folder

using MediatR;
using System.Threading;
using Shared.Repository;
using System.Threading.Tasks;

namespace Shared.MediatR
{
    public class PingQuery : IRequest<string> { }

    public class PingQueryHandler : IRequestHandler<PingQuery, string>
    {
        readonly IMyRepository _myRepository;

        public PingQueryHandler(IMyRepository myRepository)
        {
            _myRepository = myRepository;
        }

        public Task<string> Handle(PingQuery request, CancellationToken cancellationToken)
        {
            return Task.FromResult("Pong");
        }
    }
}

(WorkerService.csproj)

WorkerService.cs

using MediatR;
using Shared.MediatR;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace WorkerService
{
    public class Worker : BackgroundService
    {
        readonly IMediator _mediator;
        readonly ILogger<Worker> _logger;
        
        public Worker(ILogger<Worker> logger, IMediator mediator)
        {
            _logger = logger;
            _mediator = mediator;
        }

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                var result = await _mediator.Send(new PingQuery());

                _logger.LogInformation("MediatR result: {result}", result);

                await Task.Delay(1000, stoppingToken);
            }
        }
    }
}

Program.cs

using MediatR;
using Shared.MediatR;
using Shared.Repository;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.DependencyInjection;

namespace WorkerService
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureServices((hostContext, services) =>
                {
                    services.AddMediatR(typeof(PingQuery));
                    services.AddScoped<IMyRepository, MyRepository>();

                    services.AddHostedService<Worker>(); // <- Singleton Resolution
                });
    }
}

With this configuration, we got the following error:

(Exception Unhandled)
System.InvalidOperationException: 'Error constructing handler for request of type
MediatR.IRequestHandler`2[Shared.MediatR.PingQuery,System.String]. Register your handlers with the
container. See the samples in GitHub for examples.'

(Inner Exception)
InvalidOperationException: Cannot resolve
'MediatR.IRequestHandler`2[Shared.MediatR.PingQuery,System.String]' from root provider because it
requires scoped service 'Shared.Repository.IMyRepository'.

Worker ServiceCollection
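For context (this illustration is not part of the original post): the generic host enables scope validation by default in the Development environment, and resolving a scoped service straight from the root provider fails with exactly this kind of error. A minimal, standalone sketch:

using System;
using Microsoft.Extensions.DependencyInjection;

class ScopeValidationDemo
{
    interface IMyScopedService { }
    class MyScopedService : IMyScopedService { }

    static void Main()
    {
        var services = new ServiceCollection();
        services.AddScoped<IMyScopedService, MyScopedService>();

        // Host.CreateDefaultBuilder turns on scope validation (at least in Development),
        // which is what produces the "from root provider" error above.
        using var root = services.BuildServiceProvider(validateScopes: true);

        // Throws InvalidOperationException: cannot resolve a scoped service from the root provider.
        // var bad = root.GetRequiredService<IMyScopedService>();

        // Resolving inside a scope works; this is what ASP.NET Core does for every HTTP request.
        using var scope = root.CreateScope();
        var ok = scope.ServiceProvider.GetRequiredService<IMyScopedService>();
        Console.WriteLine(ok.GetType().Name); // MyScopedService
    }
}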

And now comes the other part of the question. If we use this same "configuration" in a Web Application project, the D.I. can resolve the scoped IMyRepository.

Like:

(WebApi.csproj)

Program.cs (I decided to register the dependencies (MediatR, IMyRepository) in this class, just to keep the setup as close as possible to the "Worker Service" D.I. configuration)

using MediatR;
using Shared.MediatR;
using Shared.Repository;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.DependencyInjection;

namespace WebApi
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureServices((hostContext, services) => 
                {
                    services.AddMediatR(typeof(PingQuery));
                    services.AddScoped<IMyRepository, MyRepository>();
                })
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}

Startup.cs

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace WebApi
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddControllers();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            /*Omitted for brevity*/
        }
    }
}

Web Application ServiceCollection

WeatherForecastController.cs

using MediatR;
using Shared.MediatR;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

namespace WebApi.Controllers
{
    [ApiController]
    [Route("[controller]")]
    public class WeatherForecastController : ControllerBase
    {
        readonly IMediator _mediator;
        
        public WeatherForecastController(IMediator mediator)
        {
            _mediator = mediator;
        }

        [HttpGet]
        public async Task<string> Get()
        {
            return await _mediator.Send(new PingQuery());
        }
    }
}

When we call the HttpGet endpoint of the controller above, the D.I. is resolved correctly in PingQueryHandler:

PingQueryHandler D.I solved

Why? I looked for similar cases, and most likely it involves the way the Worker Service resolves the D.I. (without creating a scope), as mentioned here, and Jimmy Bogard (MediatR) mentions that MediatR needs the dependencies resolved within a scope here.

Is there a way to resolve this? Does anyone know the reason for the difference in dependency injection between the project types?

Would the solution be to inject IServiceScopeFactory to resolve my dependency, as mentioned in this S.O. answer? I don’t know if this applies to a Worker Service too…

I’m a little confused now, and I would like to understand how things work, any help is welcome 🙂
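One way this is commonly handled (a sketch based on the setup above, not a confirmed fix from the thread): inject IServiceScopeFactory into the Worker, create a scope per iteration, and resolve IMediator from that scope, so its scoped dependencies such as IMyRepository can be resolved as well.

using MediatR;
using Shared.MediatR;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

namespace WorkerService
{
    public class Worker : BackgroundService
    {
        readonly ILogger<Worker> _logger;
        readonly IServiceScopeFactory _scopeFactory;

        public Worker(ILogger<Worker> logger, IServiceScopeFactory scopeFactory)
        {
            _logger = logger;
            _scopeFactory = scopeFactory;
        }

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                // Each iteration gets its own scope, mirroring the per-request scope ASP.NET Core creates.
                using (var scope = _scopeFactory.CreateScope())
                {
                    var mediator = scope.ServiceProvider.GetRequiredService<IMediator>();
                    var result = await mediator.Send(new PingQuery());

                    _logger.LogInformation("MediatR result: {result}", result);
                }

                await Task.Delay(1000, stoppingToken);
            }
        }
    }
}

Registering IMyRepository as transient would also make the error go away, but creating a scope per unit of work keeps the scoped lifetime meaningful and mirrors what the Web API project gets for free with its per-request scope.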

JavaScript to show folders and files by calling a Web API

    var divFolder = document.getElementById("rootFolder");
    var divFiles = document.getElementById("files");

    function createFolderElement(data) {
        var li = document.createElement("div");
        li.setAttribute('data-id', data.filePathId);
        li.setAttribute('class', "folder");

        divFolder.appendChild(li);
        li.innerHTML = data.folderName
    }

    function createFileElement(data) {
        debugger
        var li = document.createElement("div");
        li.setAttribute('data-id', data.id);
        li.setAttribute('class', "file");

        divFiles.appendChild(li);
        li.innerHTML = data.fileName
    }

    async function fetchFolder(parentId) {
        document.getElementById('spinner').style.display = 'block'
        divFolder.innerHTML = ''

        const response = await fetch(`https://localhost:44371/api/file/folder/${parentId}`);

        if (!response.ok) {
            const message = `An error has occurred: ${response.status}`;
            $("div.message").text(message);
            $("div.message").addClass("message alert alert-warning");
            //throw new Error(message);
        }

        const files = await response.json();
        if (files.length > 0) {
            files.forEach((data) => {
                createFolderElement(data)
            })

            createFolderEventListener();

            document.getElementById('spinner').style.display = 'none'
        }
    }

    async function fetchFiles(filePathId) {
        document.getElementById('spinner').style.display = 'block'
        divFolder.innerHTML = ''

        const response = await fetch(`https://localhost:44371/api/file/files/${filePathId}`);

        if (!response.ok) {
            const message = `An error has occurred: ${response.status}`;
            $("div.message").text(message);
            $("div.message").addClass("message alert alert-warning");
            //throw new Error(message);
        }

        const folders = await response.json();
        debugger
        if (folders.length > 0) {
            folders.forEach((data) => {
                createFileElement(data)
            })

            createFileEventListener();
        }

        document.getElementById('spinner').style.display = 'none'
    }

    function createFolderEventListener() {
        const myfolder = document.getElementsByClassName("folder");
        [...myfolder].forEach(function (element) {
            element.addEventListener("click", function () {
                fetchFolder(element.dataset.id);
                fetchFiles(element.dataset.id);
            });
        });
    };

    function createFileEventListener() {
        const myfolder = document.getElementsByClassName("file");
        [...myfolder].forEach(function (element) {
            element.addEventListener("click", function () {
                downloadPdf(element.dataset.id, element.textContent)
            });
        });
    };

    $(function () {
        document.getElementById('spinner').style.display = 'none'
        fetchFolder(1);
    })

    div.folder {
        border: 1px solid black;
        margin: 2px;
        cursor: pointer
    }

    div.file {
        border: 1px solid black;
        margin: 2px;
        cursor: pointer
    }
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="message"></div>
<img src="~/img/loading.gif" id="spinner" />
<div id="rootFolder"></div>
<div id="files"></div>

SeedVPS – SSD Web Hosting in Europe, Netherlands | cPanel | CloudLinux | LiteSpeed | From €5 | NewProxyLists

__

Plans starting from €5 EUR

Check out our web hosting plans here:

=====> https://www.seedvps.com/web-hosting <====

__

Our servers

  • HP Generation 9 Servers
  • Dual Intel E5 CPUs
  • Pure SSD Storage
  • DDR4 ECC RAM
  • Enterprise HDDs / Datacenter SSDs
  • Hardware RAID10 Storage
  • 10 Gbps NICs

__

Features

  • CloudLinux OS
  • Latest cPanel
  • LiteSpeed Web Server
  • MariaDB
  • PHP Version Selector
  • Python and Ruby Selector
  • Let’s Encrypt SSL
  • Free Migration
  • DDoS Protection
  • Softaculous One-Click Software install (300+ Scripts)
  • Jetbackup – Free Daily Backup
  • Imunify360 Virus & Malware protection
  • Instant Setup
  • 7 Days Money Back Guarantee
  • 10Gbps Port
  • 99.9% Uptime Guaranteed

__

50+ INTERNATIONAL Payment methods: PayPal, Payza, Skrill, Credit/Debit Cards, iDEAL, Sofort Banking, Bank Transfer, Bitcoin and more.

Looking Glass: lg.nl.seedvps.com

Status Page: status.seedvps.com

SeedVPS is an established company operating since 2013

Visit our website https://www.seedvps.com

Contact us (email protected)

web crawlers – How do I get a list of ecommerce websites?

I’m planning to do an analysis of online stores. I can create a bot that will visit the websites and get the required information, but how do I get a list of the online stores in the first place? I can’t find any obvious Google query that will give me that list. Is there any other place to start from?

Any ideas would be greatly appreciated.

amazon web services – Nginx responding with 429s, but I don’t have rate limiting configured

I’m hosting a Rails app on AWS Elastic Beanstalk and I’m noticing my nginx server returning 429s in the nginx log. The problem is, I can’t see where limit_req is being defined, so I’m struggling to understand why nginx would ever return a 429. Below is a copy of my nginx config.

[root@ip-172-30-1-227 current]# sudo nginx -T
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    index   index.html index.htm;

    server {
        listen       80;
        listen       [::]:80;
        server_name  localhost;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
        }

        # redirect server error pages to the static page /40x.html
        #
        error_page 404 /404.html;
            location = /40x.html {
        }

        # redirect server error pages to the static page /50x.html
        #
        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }

}

# configuration file /etc/nginx/conf.d/proxy.conf:
client_max_body_size 20M;
large_client_header_buffers 4 32k;

# configuration file /etc/nginx/conf.d/webapp_healthd.conf:
upstream my_app {
  server unix:///var/run/puma/my_app.sock;
}

log_format healthd '$msec"$uri"'
                '$status"$request_time"$upstream_response_time"'
                '$http_x_forwarded_for';

server {
  listen 80;
  server_name _ localhost; # need to listen to localhost for worker tier

  if ($time_iso8601 ~ "^(\d{4})-(\d{2})-(\d{2})T(\d{2})") {
    set $year $1;
    set $month $2;
    set $day $3;
    set $hour $4;
  }

  access_log  /var/log/nginx/access.log  main;
  access_log /var/log/nginx/healthd/application.log.$year-$month-$day-$hour healthd;

  location / {
    proxy_pass http://my_app; # match the name of upstream directive which is defined above
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }

  location /assets {
    alias /var/app/current/public/assets;
    gzip_static on;
    gzip on;
    expires max;
    add_header Cache-Control public;
  }

  location /public {
    alias /var/app/current/public;
    gzip_static on;
    gzip on;
    expires max;
    add_header Cache-Control public;
  }
}