iPhone downloading large data

Why do the iPhones in my house periodically consume all of our download bandwidth? It happens at least once a week, probably closer to once a day.

I only notice it when I am bothered by the slow internet.

How can I stop this?

Kernel crash when saving / retrieving large files

I've encountered crashes in Mathematica 11.0.1 on a machine at work. I've spotted the problem when Mathematica runs Save[fileName] or Get[fileName], but sometimes the crash occurs a few minutes after loading a file. The files I use are quite large (350 to 650 MB) and contain previously saved variables. Crashes seem to happen randomly: sometimes the file loads correctly and the script finishes, but sometimes it hangs on the same file. I've checked the logs, and whenever there was a crash I was the only one on the machine running Mathematica or anything else.

What is strange is that on my laptop (Mathematica v12) I encounter no problems, and neither does my officemate (same version of Mathematica as the server).

Is there an option that could have been changed on the server that would explain these problems? I do not have root access on the server.

backup – How to export a large WhatsApp conversation with media exactly as it appears on my phone?

I want to export a huge WhatsApp conversation (346,000 messages) with media (photos, videos and voice notes) to my computer, as it appears in WhatsApp.

There is a Chrome extension that does exactly what I want by downloading the conversation from WhatsApp Web to HTML, but it hangs at around 100,000 messages. Is there another option?

I cannot export it directly via WhatsApp because that is limited to 40,000 messages. Also, my phone is not rooted, so extracting the SQLite files is not an option.

(Technical answers are strongly encouraged)

Iteration – An efficient way to go through a large list and check conditions

The problem I am trying to solve is how to assign charges to a number of particles subject to certain charge conditions. It is conceptually very simple but requires processing a very large list. There are four possible charges, (0, 1, 2, 3), assigned to 19 particles, so there are a total of $4^{19}$ lists of length 19 to go through. My current attempt uses

Length[Select[Tuples[{0, 1, 2, 3}, 19], C1]]

(C1 is a condition), which seems to store all the lists in memory and crashes my computer. Since I only need to count the number of possible assignments that meet the requirement, I do not need to store the list in memory. What is the most efficient way to go through this very long list and solve the problem in a reasonable time? Thanks in advance.

P.S. The conditions I want to check are simple: the last three particles have the same charge, or the sum of the charges of some particles is zero. For example, C1 is

C1[charges_] := Equal @@ charges[[{-1, -2, -3}]]
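As an illustration of counting without materializing the list, here is a lazy version of the same check (in Python rather than Mathematica, and on a reduced number of particles, since enumerating all $4^{19}$ tuples one by one would take far too long in any language — for conditions like "last three equal" the count can instead be obtained combinatorially, as the assertion below shows):

```python
from itertools import product

def count_lazy(n_particles, charges=(0, 1, 2, 3)):
    """Count assignments whose last three charges are equal,
    generating the tuples lazily instead of storing them all."""
    return sum(
        1
        for t in product(charges, repeat=n_particles)
        if t[-1] == t[-2] == t[-3]
    )

# Sanity check on a small instance: fixing the last three charges to a
# common value (4 choices) leaves n-3 free positions, so the count is
# 4^(n-3) * 4 = 4^(n-2). For 19 particles that is simply 4^17.
n = 6
assert count_lazy(n) == 4 ** (n - 2)
print(count_lazy(n))  # 256
```

The generator-based `sum` keeps only one tuple in memory at a time, which is the direct analogue of replacing `Select[Tuples[...], C1]` with a streamed count.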

Raycasting – What is the most efficient way to implement accurate ray picking for scenes with large meshes?

I'm wondering how to implement ray picking most efficiently for scenes with very large meshes (> 1 million faces). Right now I'm using BulletSharp physics with a TriangleMesh shape, which works but is not very fast… Sometimes BulletSharp crashes because the allocator cannot malloc a large enough contiguous memory block (32-bit process).

I am looking for alternatives to BulletSharp in C#. I found that the Helix Toolkit uses an octree per mesh internally to perform ray picking, but it does not provide a collision test between meshes, which I would also need at some point.

My current conclusion is that it is best to use the Helix Toolkit approach for ray picking and a custom method for the mesh collision test, because I do not need rigid-body simulation.
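Whatever acceleration structure is used (octree, BVH, or Bullet's TriangleMesh), the leaf-level work is the same ray–triangle test. A minimal sketch of the standard Möller–Trumbore intersection, in Python purely for illustration (the question itself is about C#):

```python
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direc, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:        # outside first barycentric bound
        return None
    qvec = cross(tvec, e1)
    v = dot(direc, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:    # outside second barycentric bound
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None

# A ray pointing straight down at the unit right triangle in the z=0 plane:
t = ray_triangle((0.25, 0.25, 1.0), (0.0, 0.0, -1.0),
                 (0, 0, 0), (1, 0, 0), (0, 1, 0))
print(t)  # 1.0
```

The picker's job is then only to cull: the spatial structure narrows a million faces down to the handful of leaves the ray actually crosses before running this test.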

postgresql – Postgres: speed up index creation for a large table

I have a large Postgres table with over 2 billion rows (1.5 TB), mostly non-null varchar columns. To speed up insertion, I dropped the indexes before bulk loading. However, creating the b-tree indexes afterwards takes forever. In one of the runs that I cut short, it had already taken more than 12 hours to create the indexes.

Sample table and the indexes I am trying to create:

        Column        |       Type        | Modifiers
----------------------+-------------------+-----------
 name                 | character varying | not null
 id                   | character varying |
 lifecycle_id         | character varying |
 dt                   | character varying |
 address              | character varying |

Indexes:
    "name_idx" PRIMARY KEY, btree (name)
    "id_idx" btree (rec_id)
    "lifecycle_id_idx" btree (lifecycle_id)

The actual table has 18 columns. I have set maintenance_work_mem to 15 GB. This is Postgres 9.6.11 on RDS; the instance class is db.m4.4xlarge.

Since there are three indexes, sorting the data before inserting it would be difficult. Would it be faster to simply insert the data without dropping the indexes? Any other suggestions for speeding up index creation?

excel – Extracting data from very large CSV files

I have a 40 GB CSV file with over 60 million rows to analyze. Each row has an identifier (a number), and the same identifier recurs: for example, the identifier of the first row repeats approximately 150,000 rows later.

I would like a way to go through the whole file, extract the rows with the same identifier, and write them to new CSV files. Is there a good automated way to do this? Note that the file is very large and Excel has trouble opening it.
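One way to do this without Excel is to stream the file once with Python's csv module and append each row to a per-identifier output file, so memory use stays flat regardless of file size. A sketch, where the identifier being in the first column and the `cache_size` bound are assumptions:

```python
import csv
import os

def split_by_id(src_path, out_dir, id_column=0, cache_size=100):
    """Stream a large CSV once, appending each row to <out_dir>/<id>.csv.
    Keeps only a bounded cache of open file handles, so neither the rows
    nor the groups ever have to fit in memory."""
    os.makedirs(out_dir, exist_ok=True)
    handles = {}                      # id -> open output file (bounded)
    with open(src_path, newline="") as src:
        for row in csv.reader(src):
            key = row[id_column]
            if key not in handles:
                if len(handles) >= cache_size:   # cap open file handles
                    _, old = handles.popitem()
                    old.close()
                handles[key] = open(
                    os.path.join(out_dir, f"{key}.csv"), "a", newline=""
                )
            csv.writer(handles[key]).writerow(row)
    for f in handles.values():
        f.close()
```

Because outputs are opened in append mode, an evicted identifier's file is simply reopened when that identifier shows up again 150,000 rows later, so the single pass still groups everything correctly.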

co.combinatorics – Bounds on the chromatic number when the maximum degree is large

For a regular graph with $n$ vertices and maximum degree $\Delta$, it is easy to see that the chromatic number satisfies $\chi \le \frac{n}{2}$ if $\frac{n}{2} \le \Delta \lt n-1$ (since a regular graph on $n$ vertices with maximum degree $n-2$ is the complete graph with a perfect matching removed, in which every vertex is non-adjacent to exactly one other vertex, to which we can give the same color; using the handshaking lemma we get that the chromatic number of such a graph is $\frac{n}{2}$).

How could this be used to bound the chromatic number of an arbitrary non-regular graph with large maximum degree? Does this fact have a well-known name, like Reed's theorem or Brooks's theorem? Thanks in advance.
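The matching argument behind the $(n-2)$-regular case can be written out in a few lines of LaTeX (a sketch of the reasoning in the question, not a new result):

```latex
% Why an (n-2)-regular graph G on n vertices has chromatic number n/2.
% Each vertex has exactly one non-neighbour, so in the complement
% \overline{G} every vertex has degree 1; by handshaking the non-edges
% form a perfect matching M with |M| = n/2, i.e.
\[
  G \cong K_n \setminus M .
\]
% Colouring the two endpoints of each matching edge alike gives
\[
  \chi(G) \le \tfrac{n}{2},
\]
% and picking one endpoint from each pair yields a clique K_{n/2}
% (the only non-adjacencies are within pairs), so
\[
  \chi(G) \ge \omega(G) \ge \tfrac{n}{2},
  \qquad\text{hence}\qquad
  \chi(G) = \tfrac{n}{2}.
\]
```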


BlackHatKings: Proxy Lists
Posted by: Afterbarbag
Post time: June 12, 2019 at 12:06.
