c# – What resources in Azure would be suitable for handling hundreds of millions of recorded and polled events?

I have been tasked with logging all events in our company's software, with the ability to load all events for one specific entity at a time.

Each event would have a GUID-like ID, a short message describing the event, and a sender property identifying the program that sent it.

My current proposal is to send events to a storage queue, use an Azure Function with a queue trigger to handle those events and insert them into a storage table, and then query the table for the events relating to a single entity.


Would this be a good use of these resources, and would it be able to handle that many messages?

Some additional information:

Size of the entities: Very small, under 64 KB each, I would say.

How many inserts at a time: I will insert them one at a time, as they have different partition keys, so I cannot use batch inserts.

Will they all share a partition key, or will they be unique: An entity can have multiple events attached, so there can be multiple events with the same partition key, but it is also quite possible for a partition key to appear only once, if the entity has recorded only one event.
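To make the proposal concrete, here is a minimal sketch of the entity the queue-triggered Function would write, with the entity ID as partition key so all events for one entity land in the same partition. The queue payload field names and the table name are my assumptions, not a fixed schema:

```python
import json
import uuid

def event_to_entity(raw_body: bytes) -> dict:
    """Convert a queue message into an Azure Table entity.
    PartitionKey = entity ID (groups all events for one entity),
    RowKey = a fresh GUID so rows never collide."""
    event = json.loads(raw_body)
    return {
        "PartitionKey": event["entityId"],
        "RowKey": str(uuid.uuid4()),
        "Message": event["message"],
        "Sender": event["sender"],
    }

# Inside the queue-triggered Function the insert would then be roughly:
#   from azure.data.tables import TableClient
#   client = TableClient.from_connection_string(conn_str, table_name="Events")
#   client.create_entity(entity=event_to_entity(msg.get_body()))
```

Querying all events for one entity is then a single-partition query on PartitionKey, which is the cheap case for Table Storage.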

If you have come this far, thank you for taking the time to help us. It is very much appreciated!

Is it a bad SEO practice to link to hundreds of internal pages on one page?

It is not bad for SEO to link to other pages; in fact, internal links are good for SEO. You make it easier for users to find what they are looking for, and Google rewards that. That said, only link to relevant pages: 200 links on one page seems like a lot to me. Choose the links that matter most to you and your users. Also look at your URL structure and link to the pages directly above and below the current page. Example:

example.com/landing-page Links to example.com/landing-page/linking

example.com/landing-page/linking Links to example.com/landing-page and links to example.com/landing-page/linking/internal

example.com/landing-page/linking/internal Links to example.com/landing-page

Another reason not to link to too many pages is that you spread your link juice thinner. Decide for yourself which pages are most important (the ones you want to rank well in Google); those pages deserve more links.

I hope this has helped you!


Video call software with hundreds of people and twelve people able to speak

I need software that can host a public speaking contest for a school: 12 speakers plus 3 judges with video and audio, and around 300 others who can only watch and listen. Does anyone have any ideas?

Google says "duplicate store codes" when updating the hours of operation of hundreds of store locations via CSV

I am trying to change the hours of our locations (206 of them). The hours will change in the same way for all stores. I exported our locations to a CSV and made the changes, but when I try to upload the file, it reports duplicate store codes. Has anyone done this before? I read in Google's documentation that bulk editing is available.

Does anyone have a step by step for this process?



design – What is a good software architecture for quickly processing hundreds of thousands of images?

I have a dataset of millions of images in block storage that I want to analyze for anomalies. We have already written the code to process a single image; we are just struggling to find the best data-pipeline framework to execute this code in parallel across all of our images as quickly as possible.

What is a good framework for this type of work?
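For context, the per-image step is independent of every other image, so what we need is essentially a parallel map over image paths. A minimal local sketch of the pattern, with a dummy stand-in for our analysis code:

```python
from concurrent.futures import ThreadPoolExecutor

def process_image(path: str) -> tuple[str, bool]:
    """Stand-in for the existing single-image anomaly check."""
    # The real implementation would fetch the image from block storage
    # and run the analysis code that is already written.
    return path, path.endswith(".png")  # dummy "anomaly" flag

def run_pipeline(paths, workers=4):
    # Fan the per-image work out across workers. For CPU-bound analysis,
    # ProcessPoolExecutor (same API) would sidestep the GIL; at cluster
    # scale, frameworks like Ray, Dask, or Spark offer the same map pattern.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(process_image, paths))
```

Any framework recommendation would ideally just be a scalable home for this map.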

Trump says hundreds of thousands of people living with CoronaVirus are cured by just sitting down and even going to work! True or false?

True. For most healthy adults, coronavirus will just be another cold.

Children appear to be largely spared and have minor symptoms easily mistaken for a cold. In fact, children represent less than 2% of patients.

It is people over the age of 60 and those with preexisting conditions like heart disease, lung disease and diabetes who have higher death rates.

looking for a web browser that doesn't use a lot of memory when hundreds of tabs are open

I often have hundreds of tabs open in my web browser, and I am looking for a browser that handles this more intelligently, so that it does not use more memory than needed for the currently visible tabs, and ideally offers some sort of tab hierarchy based on when tabs were opened and from which website.

Is there such a web browser? Or is someone working to develop this?

agile – Make hundreds of cosmetic code changes at the last minute

A programmer keeps making cosmetic changes to the code even though we have a strict deadline and the contract states "no changes to the existing code". I wonder where this attitude comes from: DevOps? Agile?

Modifications made: 1. Replacing explicit variable types with "var" 2. Renaming short variable names to longer names 3. Refactoring injected dependencies in MVC controller classes 4. Adding design patterns (like the command pattern) to the existing code without changing functionality 5. Adding constructors with parameters to ViewModel classes (and forgetting to add a parameterless one, so the build breaks …)

Hundreds of post-test changes were made, which made the merge much more complicated.

Is it agile?

post-processing – How to align hundreds of images?

I could not find a solution that works with raw files. I know that recent versions of hugin are supposed to support raw files via dcraw, but I cannot test it myself. The next best solution, in my opinion, is to convert all of your raw files to TIFF or another lossless image format and work with those.

For my method I mainly use the hugin command-line tools (hugin_tools) in the terminal, but I also use the GUI, so keep in mind that you need to install both.
You can find the hugin suite for download here.
I use Ubuntu on my machine, but the procedure on a Mac should be the same.

First, put all the photos in one folder. This is mainly for convenience, but it will also help with the later commands.

Generate a pto file

Next, you need to generate a pto file, i.e. the file where all the image transformation data is saved.
You can do this by typing:

pto_gen *.jpg

in the terminal where all the photos are located.

Cropping to a still area

You now need to find matches between all of your images. Before doing so, there is an optional step that will greatly reduce errors if there are moving objects in your set.

Open the hugin project you generated and change the interface to advanced.

interface change

In the window that opens, go to the Masks tab, choose the Crop tab, and select your first image. Make sure "all images of selected lens" is checked, then drag in from the edges of the image to crop to the area with the least (or no) movement. This restricts the match finder to that area and reduces errors during the remapping step. You can now save and exit the program.

image cropping

Find control points

Now you can find the control points. Type:

cpfind --linearmatch *.pto

in the terminal; this will find matches between consecutive pairs of images.
You can also find matches between all images by omitting the `--linearmatch` option, but with hundreds of images that will take a long time and is probably unnecessary.
You can read more about cpfind here.

Then you need to clean up the control points you found; there are two commands for this:

celeste_standalone -i default.pto -o default.pto

will try to remove control points that sit on clouds (more information here).

cpclean -o default.pto default.pto

will delete control points with a high error value. (cpclean help page)

Reset cropping

Now that we are done with the control points, open the newly generated project (default.pto), go back to the Masks tab as before, select the Crop tab again, and click the Reset button; this turns off cropping for all images.


After that, you need to optimize the image positions using the control points. Type:

pto_var --opt="y, p, r, TrX, TrY, TrZ" -o default.pto default.pto
autooptimiser -n -o default.pto default.pto

This will optimize the position and distortion of your set of images; you can read more about this process here.

You're almost done, just type:

pano_modify -o default.pto --projection=0 --fov=AUTO --center --canvas=AUTO --crop=AUTOHDR --output-type=REMAPORIG default.pto

to modify the project configuration.


Finally, simply type:

nona -m TIFF_m -o remapped default.pto

to output the remapped images.
If you want to stack the images as well, you can use:

hugin_stacker --output=median --mode=median remapped*.tif

You can find more information about the hugin command line tools here.

There you go, that is my take on your problem. There are some missing links because this is my first answer, so I was limited to 8. If there are any errors or you run into problems, let me know. My English is not the best, so sorry for any grammatical or spelling mistakes. Cheers!

phpmyadmin – Mass deletion of hundreds of WooCommerce attribute taxonomies

I am trying to remove all attribute taxonomies. I have used the queries below to remove the terms belonging to these attributes, but the attribute taxonomies themselves still exist under Attributes. I have 1,243 that I would like to delete.

DELETE FROM wp_terms WHERE term_id IN
    (SELECT term_id FROM wp_term_taxonomy WHERE taxonomy LIKE 'pa_%');
DELETE FROM wp_term_taxonomy WHERE taxonomy LIKE 'pa_%';
DELETE FROM wp_term_relationships WHERE term_taxonomy_id NOT IN
    (SELECT term_taxonomy_id FROM wp_term_taxonomy);

How can I also remove the attribute definitions themselves from the database?
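From what I can tell, the attribute definitions themselves live in `wp_woocommerce_attribute_taxonomies`, where `attribute_name` is stored without the `pa_` prefix, so an extra DELETE on that table should be what clears the Attributes screen. Here is the logic I am considering, sketched against an in-memory SQLite mirror of the tables so it can be sanity-checked (in MySQL the `||` concatenation would be `CONCAT('pa_', attribute_name)`; column names follow the standard schema but are worth verifying):

```python
import sqlite3

# In-memory stand-ins for the relevant WordPress/WooCommerce tables.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE wp_term_taxonomy (term_id INTEGER, taxonomy TEXT);
    CREATE TABLE wp_woocommerce_attribute_taxonomies (
        attribute_id INTEGER, attribute_name TEXT
    );
    INSERT INTO wp_term_taxonomy VALUES (1, 'pa_color'), (2, 'category');
    INSERT INTO wp_woocommerce_attribute_taxonomies VALUES (10, 'color');
""")

# Delete the attribute definitions whose taxonomy name (with the 'pa_'
# prefix added back) matches an attribute taxonomy.
db.execute("""
    DELETE FROM wp_woocommerce_attribute_taxonomies
    WHERE 'pa_' || attribute_name IN
        (SELECT taxonomy FROM wp_term_taxonomy WHERE taxonomy LIKE 'pa_%')
""")
db.commit()

remaining = db.execute(
    "SELECT COUNT(*) FROM wp_woocommerce_attribute_taxonomies"
).fetchone()[0]
```

I gather WooCommerce may also cache this list in the `wc_attribute_taxonomies` transient, so the admin screen might not update until that transient is deleted or caches are flushed.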