video – After upgrading from iOS 12 to 13, Photos.app corrupts large .MOV files

I have an iPhone 8. I recently upgraded from iOS 12.4.1 (if I remember correctly) to iOS 13.3.

The very first time I tried to import photos from my iPhone 8 to my MacBook Pro, it got stuck. The Photos.app process used very little CPU and very little I/O (it should normally be maxed out), and it made no progress.

I quickly realized that it gets stuck if the "new batch" of photos contains a .MOV file that is "large enough", which turns out to be only about ONE MINUTE long.

Not only do I have to force quit, but the .MOV file ends up corrupted. I didn't notice at first because I don't usually watch these videos, or at least not right away. They are not badly corrupted: they play in VLC, but they do not play in Photos.app.

So it takes a .MOV file, uses up all that disk space, produces a slightly borked .MOV file, and then helpfully deletes the original.

The dumbest possible workaround would be to put the longest videos in a separate "album", much like iOS 13 added an album for screenshots (which annoys me; they should all just be in the "camera roll").

Another silly workaround, which works on iOS 13 but not on iOS 12, is to use AirDrop for the large videos. Which shows where the money goes.

I'm curious whether other people are seeing this problem. The Photos layout has changed a lot, so I am not really surprised.

ruby – Returning records from a large database based on a non-uniform attribute

I have a "cars" database table with millions of records. I want to search for the VIN (vehicle identification number) for each car and return a TABLE of cars that at least partially match the VIN request.

The VINs are not uniform. In the DB they look like RGNN3347382473, 3483FHJEREHJ430, HSDFfhdjfe3434023482, etc. Only the digits are significant.

The VIN query will contain only digits. If the last 5 digits of the VIN attribute match the query, it is considered a match and that car must be returned.

if params[:vin]
  Car.find_in_batches do |cars|
    if car = cars.detect { |car| car[:vin].delete("^0-9").include?(params[:vin][-5..-1]) }
      render json: car, status: :ok
    end
  end
else

The solution should work for SQLite and PostgreSQL.
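
For illustration only, here is a rough sketch of how the matching could be pushed down to the database in a form that both SQLite and PostgreSQL accept. It assumes a hypothetical vin_digits column (not in the current schema) that stores each VIN with the non-digits stripped out, and it mirrors the detect logic above, i.e. it looks for the last five digits of the query anywhere inside the digit-only VIN:

# Sketch only: vin_digits is a hypothetical column kept in sync with vin,
# e.g. via a before_save callback doing self.vin_digits = vin.delete("^0-9").
# Both SQLite and PostgreSQL support || concatenation and LIKE, so the same
# query string works on either adapter.
if params[:vin]
  last5 = params[:vin][-5..-1]
  cars  = Car.where("vin_digits LIKE '%' || ? || '%'", last5)
  render json: cars, status: :ok
end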

How can I save favorite locations in a large Word document?

I am working with a very large technical document, and there are a few sections I keep having to navigate back to. I can get there from the index, but even that is tedious. Is there a way to set personal favorites / bookmarks? I know Word supports "bookmarks", but those are actually inserted into the document itself, which I don't want, since I only mean them as a reference for myself.

What is the best way to achieve what I want, so that I can see a list of my favorite places and jump between them very quickly?

physical – Why is the caps lock key often so large on a keyboard?

It seems that the caps lock key is not a frequently used key on a computer keyboard, yet on every keyboard I have used it takes up a significant amount of space, often as much as the backspace key and more than quite a few other keys such as delete, the arrow keys, and escape.

The caps lock key has some specialized uses: engineers use it, as do people with poor motor control of their hands, and I imagine graphic designers who often use capital letters for stylistic reasons also benefit from its existence. But even in these cases, it is still a rarely used key.

Why is it so big? Should it be resized or placed in a different position on the keyboard?

performance – Speeding up linear model fitting on pairwise complete observations in a large sparse matrix in R

I have a numeric data.frame df with 134946 rows x 1938 columns.
99.82% of the data is NA.
For each pair of (distinct) columns "P1" and "P2", I have to find which rows have non-NA values for both, then run some operations on those rows (a linear model).

I wrote a script that does this, but it seems pretty slow.

This post seems to be discussing a related task, but I don't immediately see if or how it can be adapted to my case.

Take the example of this post:

set.seed(54321)
nr = 1000;
nc = 900;
dat = matrix(runif(nr*nc), nrow=nr)
rownames(dat) = paste(1:nr)
colnames(dat) = paste("time", 1:nc)
dat[sample(nr*nc, nr*nc*0.9)] = NA

My script is:

tic = proc.time()

out <- do.call(rbind, sapply(1:(N_ps-1), function(i) {
  if (i/10 == floor(i/10)) {
    cat("\ni = ", i, "\n")
    toc = proc.time();
    show(toc-tic);
  }
  do.call(rbind, sapply((i+1):N_ps, function(j) {
    w <- which(complete.cases(df[,i], df[,j]))
    N <- length(w)
    if (N >= 5) {
      xw <- df[w,i]
      yw <- df[w,j]
      if ((diff(range(xw)) != 0) & (diff(range(yw)) != 0)) {
        s <- summary(lm(yw~xw))
        o <- c(i, j, N, s$adj.r.squared,
               s$coefficients[2], s$coefficients[4], s$coefficients[8],
               s$coefficients[1], s$coefficients[3], s$coefficients[7])
      } else {
        o <- c(i, j, N, rep(NA, 7))  # same length as the full result row
      }
    } else {o <- NULL}
    return(o)
  }, simplify=F))
}, simplify=F))

toc = proc.time();
show(toc-tic);

It takes about 10 minutes on my machine.
You can imagine what happens when I have to handle the much larger (though sparser) real data: I have never managed to finish the computation.

Question: do you think this could be done more efficiently?

The thing is, I don't know which operations take the most time (subsetting df, in which case I would remove the duplicated subsetting? appending to the output matrix, in which case I would fill a flat vector and convert it to a matrix at the end? ...).
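
For illustration, here is a rough, untested sketch of one possible restructuring, applied to the example matrix dat built above: count the pairwise complete observations for all column pairs in a single matrix product, and only loop over the pairs that reach the N >= 5 threshold:

# Sketch only, using the example matrix `dat` from above; for the real data
# something like ok <- !is.na(as.matrix(df)) would be needed first.
ok <- !is.na(dat)                  # TRUE where a value is present
n_pair <- crossprod(ok)            # n_pair[i, j] = rows complete in both columns i and j
cand <- which(n_pair >= 5 & upper.tri(n_pair), arr.ind = TRUE)

out <- do.call(rbind, lapply(seq_len(nrow(cand)), function(k) {
  i <- cand[k, 1]; j <- cand[k, 2]
  w  <- ok[, i] & ok[, j]
  xw <- dat[w, i]; yw <- dat[w, j]
  if (diff(range(xw)) == 0 || diff(range(yw)) == 0)
    return(c(i, j, sum(w), rep(NA, 7)))
  s <- summary(lm(yw ~ xw))
  c(i, j, sum(w), s$adj.r.squared,
    s$coefficients[2], s$coefficients[4], s$coefficients[8],
    s$coefficients[1], s$coefficients[3], s$coefficients[7])
}))

Whether this helps will depend on how many pairs survive the threshold; the lm() calls themselves may still dominate the runtime.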

Thank you!

display – Using an ultrawide LG 34UM94-P monitor with a Late 2012 Mac mini

I have this cool new LG monitor that I mounted and plugged into my Mac mini (Late 2012, High Sierra), but the LG monitor is not receiving a signal. I tried HDMI and a Mini DisplayPort to HDMI adapter.

How do I get my Mac mini to talk to my LG monitor?

gm techniques – RPG system for a large-group one-shot campaign

Can anyone recommend an RPG system that would allow me to run:

  • a one-shot campaign (~6 hours)
  • with around 15 inexperienced players
  • that would allow me to add a little "moral" to the adventure

They would only play this one time, so I'm trying to find something that requires little or no character building, so we can start playing right away, and where the characters have only a few rules and actions, so the players don't get lost and it is easy to pick up.

I've been thinking about running a murder mystery campaign (and just giving them some sort of powers / actions to work through the mystery with), but the only RPG I have experience with is D&D, and it's a little difficult to go through all the different systems and figure out what would work.

Paginating over a large number of PostgreSQL rows in sorted order

The goal is to retrieve several million rows ordered by their timestamp. Since there are so many rows, I have to fetch them in chunks. I read about paginating the results, but all of the solutions I saw seemed to require re-sorting the data for each fetch.

In addition, the data is already almost sorted in the database and on disk.

Is there a standard way to handle this?

The solutions I see look something like:

SELECT * FROM mytable WHERE timestamp > old_timestamp ORDER BY timestamp LIMIT 100000;

After getting the results, we change old_timestamp and repeat. This basically seems to sort the whole table for every page. Is there a good way to handle this with Postgres, or should I use a more specialized time-series database?

EDIT: I created an index on the timestamp but it is still slow.
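
For reference, the index in question would be something along these lines, using the table and column names from the example query above:

-- Assumed form of the "index on the timestamp" mentioned in the edit.
CREATE INDEX mytable_timestamp_idx ON mytable (timestamp);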

ANSWER: It turned out that the slowness was actually due to another WHERE clause; what I posted was just a subquery. After restructuring the statements, it is very fast.

MySQL slow performance when UPDATING with IN on a large dataset

I need to update a main dataset from various sources, each of which supplies some of the existing records with modifications, such as a new mobile phone number.

The execution time of each query is more than 10 hours.

Environment: MySQL 8, 8-core CPU, 32 GB of memory.

I have the following main table; it currently holds 3M records:

CREATE TABLE `contact_data` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `email` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NOT NULL,
  `email_status` tinyint(3) unsigned DEFAULT '0',
  `mobile_phone` varchar(32) COLLATE utf8mb4_general_ci DEFAULT NULL,
  `firstname` varchar(128) COLLATE utf8mb4_general_ci DEFAULT NULL,
  `lastname` varchar(128) COLLATE utf8mb4_general_ci DEFAULT NULL,
  `nickname` varchar(128) COLLATE utf8mb4_general_ci DEFAULT NULL,
  PRIMARY KEY (`email`),
  UNIQUE KEY `id` (`id`),
  KEY `country` (`country`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_general_ci

So far I have tried the update in several different ways. Most source tables have only 10,000 to 100,000 records. I have also tried the same thing with MyISAM, and with "id" as the primary key.

Join:

UPDATE contact_data cd
LEFT JOIN (SELECT email, firstname FROM source2 WHERE firstname <> '' GROUP BY email ORDER BY id DESC) AS t2
ON cd.email = t2.email
SET cd.firstname = t2.firstname

Direct:

UPDATE contact_data SET mobile_phone = (SELECT phone FROM source1 WHERE email = contact_data.email ORDER BY id DESC LIMIT 1) 
WHERE mobile_phone IS NULL 

Direct with record limit:

UPDATE contact_data SET mobile_phone = (SELECT phone FROM source1 WHERE email = contact_data.email ORDER BY id DESC LIMIT 1) 
WHERE mobile_phone IS NULL 
AND email IN (SELECT DISTINCT email FROM source1)

config:

innodb_buffer_pool_size = 16G
innodb_log_file_size = 512M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_log_buffer_size = 10M

key_buffer_size = 512M

Further tuning of the configuration file would probably only bring small improvements. I don't know if there is a better way to handle this, or if I should move to another database.
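
For illustration, here is an untested sketch of the join-based variant restricted to the newest row per email, together with a supporting index on the source table. The source1 columns (id, email, phone) are assumed from the queries above; this is an outline of the idea, not a drop-in statement:

-- Sketch only: source1 columns (id, email, phone) are assumed from the queries above.
-- The composite index lets the "latest row per email" lookup use the index
-- instead of repeatedly scanning source1.
ALTER TABLE source1 ADD INDEX idx_source1_email_id (email, id);

UPDATE contact_data cd
JOIN (
    SELECT s1.email, s1.phone
    FROM source1 s1
    JOIN (SELECT email, MAX(id) AS max_id
          FROM source1
          GROUP BY email) latest
      ON latest.email = s1.email AND latest.max_id = s1.id
) src ON src.email = cd.email
SET cd.mobile_phone = src.phone
WHERE cd.mobile_phone IS NULL;

The MAX(id) grouping plus the index stands in for the correlated ORDER BY id DESC LIMIT 1 subquery, so each email is resolved once instead of once per matching row of contact_data.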