javascript – How to poll a link until it becomes available, showing a loading bar in the meantime

I’m developing a site for my final-year project (TCC), where I upload a file to an AWS service and need to wait for the download link to become available, since the processing applied to that file takes a few minutes to complete.

I’d like to know how to write a JavaScript function that checks whether the link generated by AWS is already available and, while waiting, shows a loading screen so the user understands that the process is running.
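One common approach, posted only as a sketch (pollUntilAvailable and downloadUrl are made-up names, not AWS APIs): repeatedly run a lightweight check, resolving once it succeeds, and keep the loading UI visible until then.

```javascript
// Poll until a check succeeds, waiting between attempts.
// checkFn is assumed to resolve to true once the link is available --
// in a browser it could be a HEAD request against the generated URL.
async function pollUntilAvailable(checkFn, intervalMs = 5000, maxAttempts = 60) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    if (await checkFn()) return true;                    // link is ready
    await new Promise(r => setTimeout(r, intervalMs));   // wait, then retry
  }
  return false;                                          // gave up
}

// Hypothetical browser usage: show a spinner, poll, then reveal the link.
// const available = await pollUntilAvailable(
//   () => fetch(downloadUrl, { method: 'HEAD' }).then(r => r.ok).catch(() => false)
// );
```

The spinner would be shown before the call and hidden when the returned promise resolves; the interval and attempt limit above are arbitrary.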

python 3.x – ImportError: DLL load failed while importing QtWebEngineWidgets

I want to make a simple program using a website, based on this code:


but pressing ‘run’ without modifying any of the code I get this error:

    from PyQt5.QtWebEngineWidgets import *
ImportError: DLL load failed while importing QtWebEngineWidgets: The specified module could not be found.

I have already installed: (I don’t have enough reputation to post images)

performance – Why does my site load slower on the first attempt from an incognito browser?

Chrome’s incognito window uses a fresh cache, so you could say that Chrome has two caches – a “normal” cache and an “incognito” cache. When you close ALL incognito windows and tabs, the incognito cache gets cleared. With default Chrome settings, the normal cache persists across windows and browsing sessions.

It takes extra time for your site to load in incognito because the browser is re-downloading many of the page’s resources that it would otherwise already have cached – it is working with a fresh, empty incognito cache.

You can improve load times for repeat visitors by making sure your site’s resources are being served with long-life cache headers where appropriate, so the browser won’t re-download them for a long time.
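As an illustrative sketch (the file-type list and lifetimes here are assumptions, not a standard), a server-side helper might pick a Cache-Control value per resource like this:

```javascript
// Choose a Cache-Control header per request path: versioned static assets
// can be cached for a long time, while HTML should always be revalidated.
function cacheHeaderFor(path) {
  const longLived = /\.(css|js|png|jpe?g|webp|woff2?)$/.test(path);
  return longLived
    ? 'public, max-age=31536000, immutable'  // ~1 year; safe if filenames are fingerprinted
    : 'no-cache';                            // revalidate documents on every visit
}
```

The long lifetime is only safe when asset filenames change on every deploy (fingerprinting), otherwise repeat visitors could be stuck with stale files.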

You can improve load times for first-time visitors by making sure your site’s resources are lean, so don’t include tons of JavaScript, images, or other things that slow down page load – or load them on demand with e.g. the loading="lazy" image attribute or async and defer JavaScript attributes. This may come down to going lean on plugins.

You can improve load times for both by making sure your server hardware is fast (i.e. not on over-provisioned shared hosting) and your server software is lean (e.g. not bogged down by too many plugins).

java – Could not find or load main class App

I use VS Code and copied the code from GitHub to try out JavaFX:

public class App extends Application {
    public static void main(String[] args) {
        launch(args);
    }
    @Override
    public void start(Stage primaryStage) {
        primaryStage.setTitle("Hello World!");
        Button btn = new Button();
        btn.setText("Say 'Hello World'");
        btn.setOnAction(new EventHandler<ActionEvent>() {
            public void handle(ActionEvent event) {
                System.out.println("Hello World!");
            }
        });
        StackPane root = new StackPane();
        root.getChildren().add(btn);
        primaryStage.setScene(new Scene(root, 300, 250));
        primaryStage.show();
    }
}

however, it throws the following error:

Error: Could not find or load main class App
Caused by: java.lang.ClassNotFoundException: App

I’ve already tried adding it to the classpath or something like that (the Java Debugger extension suggested it).
I’ve also tried editing settings.json, adding:

"java.project.sourcePaths": ["JavaFx Application/src"]

But that doesn’t work either.

assembly – GAS syntax bootloader load the next sector from floppy

I’m playing with real mode and was trying to code something when suddenly I exceeded the 510-byte boundary (.org 510) and a warning popped up. So I started reading about how to split the code into multiple sectors and load them from disk, using a floppy for simplicity.

I’m probably missing a lot of stuff, one piece of which would be manual stack initialisation, though I’m not sure whether that would affect the loading as it’s coded here. I’m using -monitor stdio to check the values in registers when necessary with info registers. That seems sufficient for now, but feel free to suggest better debugging tools that work with QEMU.

Feel free not to hold back, I’d like to learn more. 🙂


#!/bin/sh -xe
as -o code.o code.s -g --statistics --warn --fatal-warnings
ld -o code.bin --oformat=binary -Ttext=0x7c00 --build-id=none code.o
qemu-system-i386 -monitor stdio -drive file=code.bin,index=0,if=floppy,format=raw


.global _start

// sector 1 begin
_start:
    mov $0x0e, %ah
    mov $'a', %al
    int $0x10

    jmp load_next_sector

load_next_sector:
    // set Buffer Address Pointer to 0
    mov $0x0, %bx
    // service 02h: Read Sectors From Drive
    mov $0x2, %ah
    // Sectors To Read Count
    mov $0x1, %al
    // Cylinder/Track number 1 (zero-based)
    mov $0x0, %ch
    // Sector number 2 (one-based)
    mov $0x2, %cl
    // Head 1 (zero-based)
    mov $0x0, %dh
    int $0x13

    // on failure (carry flag set by int 13h), retry
    jc load_next_sector

    // success
    jmp sector_2

.org 510
.word 0xAA55
// sector 1 end

// sector 2 begin
sector_2:
    // dummy
    xor %cx, %cx
    mov $0x13, %cl

    mov $0x0e, %ah
    mov $'b', %al
    int $0x10
    jmp pause

pause:
    jmp pause

.org 1024
// sector 2 end

microservices – Should you expose API endpoints on an application that is under heavy load or delegate it to another application?

It’s a question of ease of programming and deployment.

You have services A and B; when you run your application/solution as a whole, A is under heavy load and B is under light load.

In order to prevent A from being a bottleneck you want to add more CPU resources to A, but not to B, as they would just go unused.

If you are using powerful multicore boxes you might as well put both A and B on all the boxes, and if you are good at multiprocessor programming, even have them in the same application. The OS will divide up the available processing power according to the needs of each application; some B’s will never get any load, but it won’t matter, as the overhead of running an idle B is insignificant.

Your deployment is pretty simple because you just put all your micro-services on every box and scale the number of boxes. If load unexpectedly increases on B, you can use up some slack in your A processing without having to worry.

If, however, you are using tiny containers that can barely handle a single A, you might want to consider the overhead of running B when it’s not going to get any traffic.

Or maybe each instance only has a single processor, which will have to switch between working on A and B, causing delays.

In this case you might find it better to separate A and B into different programs. Then you can have boxes dedicated to A or B and scale them independently.

Your deployment is more complex, but your code is simplified, and it is arguably a more efficient use of resources.
If load on B increases or on A decreases, you have to worry about adjusting the scaling levels, rather than just having one big pool.

I’ve seen both done and I don’t think there is a huge difference either way, to be honest. I don’t think I would go as far as combining the two services into a single application, though – unless you were querying the live job being processed or something.

A central database T adds an extra wrinkle, as you obviously want to avoid it becoming a bottleneck in itself. But I think this is a common solution.

engineering – Where can I start if I want to program my own load balancer or networking stuff?

Hope you’re doing well. Recently I’ve been really interested in load balancers and how they work. I wanted to program my own load balancer starting from the basics; do you know any good source where I can find the necessary information to learn about it? Any recommendations?

I’m also very keen on these kinds of programming topics, such as programming proxies or other things related to networking; can you recommend a path to follow?

Sorry if I’ve misspelled any words, I haven’t practised my English in a while :]


fedora – podman load executed in systemd service not taking effect on fcos system

Hi, I’m new to Fedora CoreOS, but I’m trying a very simple proof of concept: loading a .tar docker image into the local image store of a Fedora CoreOS machine at startup. I’m using systemd and a service which performs the load, but I’m missing something: the service is executed, yet when I type podman images the image isn’t listed.

This is my unit configuration

cat etc/systemd/system/test.service


[Unit]
Description=My custom service
The script being called is

cat /etc/rc.d/init.d/

#!/usr/bin/env bash

podman load -i /etc/files/docker.tar

When the machine boots I check the status of the service and it ran OK:

(service status output not shown)

But after that, if I check podman images, the docker image is not listed.
Note: if I run the commands manually, it works.

Any ideas?

entities – How can I load an entity’s bundle object from a loaded entity?

If I want to get an entity’s bundle object, with a node, I can do something like:

$bundle_object = \Drupal::entityTypeManager()
    ->getStorage('node_type')
    ->load($node->bundle());

But this seems a little long-winded. Is there a way to get the object directly from an entity? Something like $entity->getBundleObject()?

child theme – Why does using wp_register_style without wp_enqueue load and print CSS tags in the front-end?

I have this piece of code that registers a CSS file in a child theme, and when visiting the front-end my_css is actually included. Why is that? I thought registering meant registering for later use, to allow – among other things – conditional enqueuing of stylesheets/scripts. This shouldn’t work without enqueuing my_css at some point, right?

function register_my_styles() {
    wp_register_style('divi_parent', get_template_directory_uri() . '/style.css', '', null, 'all');
    wp_register_style('my_css', get_stylesheet_directory_uri() . '/src/css/my_css', array('divi_parent'), null, 'all');
}

add_action('wp_loaded', 'register_my_styles');

Commenting out the wp_register_style('my_css', …) call at least works 🙂 but this behaviour prevents me from using conditional enqueuing, since my_css is rendered on the front-end just by calling wp_register_style.

Now the funky part is: I am using Divi 4.10.4 as a parent theme. When I downgrade version by version all the way to 4.0.1, wp_register_style works normally and doesn’t output the CSS line on the front-end.

Which would be fine but enqueuing also doesn’t work:

function register_my_styles() {
    wp_register_style('divi_parent', get_template_directory_uri() . '/style.css', '', null, 'all');
    wp_register_style('my_css', get_stylesheet_directory_uri() . '/src/css/my_css', array('divi_parent'), null, 'all');
}

function enqueue_ecolo_styles() {
    wp_enqueue_style('my_css');
}

add_action('wp_loaded', 'register_my_styles');
add_action('wp_enqueue_style', 'enqueue_my_styles');

What am I missing and misunderstanding here?