php – What is the best way to make a large website multi-language?

I am working on a final academic project, an admin panel with many screens and processes, and one requirement is support for multiple languages: Spanish and English.

The project is based on: https://adminlte.io/blog/free-admin-panels

The web server is a LAMP stack. I am not using a PHP framework, just plain PHP, HTML, JavaScript, jQuery, and MySQL.

https://www.youtube.com/results?search_query=large+web+site+best+way+to+create+a+multi-language%3F

I think this website is too big for the usual approaches to translation. Any suggestions?

I am aware the question is subjective, but most of the options I have looked at are aimed at simple sites rather than an administrative system. The other complication is that the screens are loaded via AJAX, so the translations are supposed to come from the server (rendered by PHP).
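
For illustration, one common approach on a LAMP stack is to keep the UI strings in a database table keyed by string identifier and language, so the PHP code behind each AJAX-loaded screen can pull the strings for the active language and substitute them by key. This is only a sketch; the table and column names below are made up.

-- Illustration only: table and column names are invented.
CREATE TABLE ui_translations (
    string_key  VARCHAR(100)  NOT NULL,   -- e.g. 'users.title', 'buttons.save'
    lang        CHAR(2)       NOT NULL,   -- 'es' or 'en'
    translation VARCHAR(1000) NOT NULL,
    PRIMARY KEY (string_key, lang)
);

-- The PHP layer loads the strings for the current language once per request
-- and substitutes them by key while building the HTML it returns via AJAX:
SELECT string_key, translation
FROM ui_translations
WHERE lang = 'es';

Flat PHP array files per language (e.g. one file per locale) or gettext are the other usual options; the database route simply keeps the AJAX-rendered screens and their translations in one place.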

2010 – SharePoint calculated field unable to update the value of all items in a large list

So here's the problem:

I have a list with about 1 million records

I added a calculated column, with a very simple formula:

"0" + (textfield)

and set it to return a number.

The request returned an update conflict error, but when I looked at the list I found that items were being updated, oldest to newest, with the new calculated field value. Some time later, however, the process stopped, leaving more than half of the items without a calculated value; those items show something like -4XXXXXXX (which I believe represents a null number).

I checked the logs and discovered that for 3 hours, every 20 seconds, the ULS kept logging the following error:

A large block of literal text was sent to SQL. This can result in blocking in SQL and excessive memory use on the front end. Verify that no binary parameters are being passed as literals, and consider breaking up batches into smaller components. If this request is for a SharePoint list or list item, you may be able to resolve this issue by reducing the number of fields.

And then, finally, a query timeout error with the same correlation ID as the previous error.

Is there a safe way to add this calculated field?

Do I need to increase some timeout setting to give the calculated field enough time to be computed for all the items?

How can I recalculate the remaining items without affecting their modification date?

Thank you!

film – What is the cause of these large white spots on my developed negatives?

These spots appear to be typical dust spots. I encounter them regularly, and they are usually quite easy to remove in Lightroom or Photoshop. In the darkroom it's more complicated: even after careful dusting before exposure, remaining dust can still leave (significant) marks on the print. That's where retouching kits come in.

You will never get rid of dust entirely. As you said, it's a real scourge. Depending on how the film was handled and stored, you will get more or less dust, and its distribution may also vary from one image to another. There are, however, some ways to reduce the amount of dust in your scans and prints:

  1. If you develop film yourself, let it dry in a damp area, such as the bathroom. The moist air captures much of the dust floating freely in the air and lets it settle to the ground, so it does not land on your film.
  2. Use canned compressed air to remove dust from the film strips and the scanner glass. Alternatively, you can use a hand-held dust blower.
  3. Use microfiber cloths to wipe the strips and the glass clean. Anti-static wipes are a good option; I know Ilford sells, or used to sell, them.

If you are not sure whether it's dust or some other artifact, it is always useful to go back to the negative (or positive) itself and inspect those spots there to help determine the cause of the artifacts you see.

EDIT

The streaks you are referring to look like bromide drag. Bromide drag can occur when bromide released by the film emulsion overwhelms the developing agent. Solutions: invert the tank properly during development (so avoid relying only on the agitation stick) or use a stronger developer.

sharepoint online – Moving large files with modern authentication

I'm trying to move files from one site collection to another. The following code works for smaller files but not for large files because of memory exceptions:

if (item.FileSystemObjectType == FileSystemObjectType.File)
{
    var fileName = item["FileLeafRef"] as string;
    var fileSize = item["File_x0020_Size"];

    item.Context.Load(item.File);

    using (var stream = item.File.OpenBinaryStream().Value)
    {
        item.Context.ExecuteQueryWithIncrementalRetry(3, logger);

        var fi = new FileCreationInformation();
        fi.ContentStream = stream;
        fi.Url = fileName;
        fi.Overwrite = true;
        folder.Files.Add(fi);
        destLibrary.Context.ExecuteQueryWithIncrementalRetry(3, logger);
    }
}

Is it possible to do the same thing in chunks? Note that SaveBinary, etc. cannot be used with modern authentication.

unity – To split or not to split large 3D objects?

I have a pretty simple question for which I am looking for advice:

Should large 3D objects be split into smaller objects?

By large, I mean an object that would be as big as a whole game level, such as the mountain below: the first image shows it as a single mesh, the second image shows it as several meshes.

[Image: the mountain as a single mesh]

[Image: the mountain split into several meshes]

Points I've considered (possibly wrong):

  • a single large object keeps the scene hierarchy simple, with fewer objects in total, but may be rendered more often than necessary, because the whole object is considered visible whenever any part of it is.

  • a large object split into smaller objects makes the scene hierarchy heavier, but should perform better, because only the parts visible to the camera will be rendered and the rest will be culled.

Note that these objects are not complex by today's standards: this mountain has fewer than 1,000 triangles, and a full level is likely to stay under 30,000 triangles.

Ideally, I would like to have as few objects as possible in the scene hierarchy so that it stays simple, but I also wonder whether simplifying the level this way might cause additional problems I have not thought of.

partitioning – Working with a very large and growing MySQL table

I have a table that grows daily and gets very big very quickly.
How can I handle this kind of data?
Can I partition the table, and if so, how?
Here is the structure of the table:
CREATE TABLE IF NOT EXISTS checks (
  id int(10) NOT NULL AUTO_INCREMENT,
  environment char(10) NOT NULL DEFAULT '0',
  CheckName char(50) NOT NULL DEFAULT '0',
  CheckDate varchar(100) NOT NULL,
  SeverityNumber int(11) NOT NULL,
  SysName varchar(50) NOT NULL,
  SysType text NOT NULL,
  ResultTable varchar(18000) NOT NULL,
  maxSeverety int(11) DEFAULT NULL,
  appl_code int(11) DEFAULT NULL,
  appl_model int(11) DEFAULT NULL,
  CheckMax int(11) DEFAULT NULL,
  elapsed decimal(20,10) DEFAULT NULL,
  cpu decimal(20,10) DEFAULT NULL,
  manual tinyint(4) NOT NULL DEFAULT 0,
  closedBy varchar(50) NOT NULL DEFAULT '0',
  closedDate timestamp NOT NULL DEFAULT current_timestamp(),
  CheckGroup char(3) DEFAULT NULL,
  dateInt int(6) NOT NULL,
  PRIMARY KEY (id, dateInt),
  UNIQUE KEY Index2 (SeverityNumber, CheckName, CheckDate),
  UNIQUE KEY Index1 (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COMMENT='OwaspAvilability';

Any ideas?
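
For reference, here is a minimal sketch of what RANGE partitioning on dateInt could look like, assuming dateInt encodes something like YYYYMM; the partition names and boundaries below are invented for illustration. Note that MySQL requires every unique key of a partitioned table (including the primary key) to contain the partitioning column, so Index1 (id) and Index2 (SeverityNumber, CheckName, CheckDate) would have to be dropped or extended with dateInt first.

-- Illustrative only: RANGE partitioning on dateInt (assumed to encode YYYYMM).
ALTER TABLE checks
PARTITION BY RANGE (dateInt) (
    PARTITION p201901 VALUES LESS THAN (201902),
    PARTITION p201902 VALUES LESS THAN (201903),
    PARTITION p201903 VALUES LESS THAN (201904),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- Old data can then be removed cheaply by dropping whole partitions
-- instead of running large DELETEs:
-- ALTER TABLE checks DROP PARTITION p201901;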

posts – The wp_posts table is extremely large

The posts table in my WordPress database is almost 1 GB in size, even though it contains only 3,409 posts, which is significantly larger than my other WordPress databases. None of the rows in the posts table seems to have particularly long entries. I've checked the maximum length of each column of the table as follows:

select max(char_length(`ID`)) from wpwn_posts;
select max(char_length(`post_author`)) from wpwn_posts;
select max(char_length(`post_date`)) from wpwn_posts;
select max(char_length(`post_date_gmt`)) from wpwn_posts;
select max(char_length(`post_content`)) from wpwn_posts;
select max(char_length(`post_title`)) from wpwn_posts;
select max(char_length(`post_excerpt`)) from wpwn_posts;
select max(char_length(`post_status`)) from wpwn_posts;
select max(char_length(`comment_status`)) from wpwn_posts;
select max(char_length(`ping_status`)) from wpwn_posts;
select max(char_length(`post_password`)) from wpwn_posts;
select max(char_length(`post_name`)) from wpwn_posts;
select max(char_length(`to_ping`)) from wpwn_posts;
select max(char_length(`pinged`)) from wpwn_posts;
select max(char_length(`post_modified`)) from wpwn_posts;
select max(char_length(`post_modified_gmt`)) from wpwn_posts;
select max(char_length(`post_content_filtered`)) from wpwn_posts;
select max(char_length(`post_parent`)) from wpwn_posts;
select max(char_length(`guid`)) from wpwn_posts;
select max(char_length(`menu_order`)) from wpwn_posts;
select max(char_length(`post_type`)) from wpwn_posts;
select max(char_length(`post_mime_type`)) from wpwn_posts;
select max(char_length(`comment_count`)) from wpwn_posts;

None of these shows an excessively long maximum length or anything out of the ordinary: the longest post contains just over 10,000 characters. I've included a screenshot from phpMyAdmin. What could cause this? I've had a problem with malware infecting my server, so I want to make sure nothing is wrong here.

[Image: phpMyAdmin screenshot of the wp_posts table size]
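
For what it's worth, here are two more diagnostic queries in the same spirit (a sketch only; 'wordpress' stands in for the actual schema name): the first splits the roughly 1 GB between data, indexes, and unreclaimed free space, and the second counts rows by post type and status, which can reveal piles of revisions, auto-drafts, or injected content.

-- Where does the space go: data, indexes, or free space left behind by deletes?
SELECT table_name,
       ROUND(data_length  / 1024 / 1024) AS data_mb,
       ROUND(index_length / 1024 / 1024) AS index_mb,
       ROUND(data_free    / 1024 / 1024) AS free_mb
FROM information_schema.tables
WHERE table_schema = 'wordpress'        -- replace with the real database name
  AND table_name   = 'wpwn_posts';

-- Row counts by post type and status (revisions, auto-drafts, injected posts, ...)
SELECT post_type, post_status, COUNT(*) AS row_count
FROM wpwn_posts
GROUP BY post_type, post_status
ORDER BY row_count DESC;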

SQL Server 2014 – Changing the clustered key on a large table

I have a big fact table in a data warehouse (40 GB of data in dev, 120 GB in production) that was created in a very classic way:

CREATE TABLE BigFactTable
( BigFactTableId bigint IDENTITY
      CONSTRAINT [PK.BigFactTable] PRIMARY KEY CLUSTERED,
  TreatmentId int
      CONSTRAINT [FK.BigFactTable.TreatmentID]
      FOREIGN KEY REFERENCES Treatment(TreatmentId),
  --other columns
)

But I noticed that this table is queried mostly by the TreatmentId field.

One noteworthy property is that TreatmentId grows along with BigFactTableId, a bit like the chapters of a big technical book.

Putting a nonclustered index on (TreatmentId) with the needed included columns brings the index space to:

  • 24 GB in development
  • 96 GB in production

Goal

Reduce the table's footprint while maintaining or improving query/insert performance.

What I've tried

In dev: drop the indexes and the previous clustered primary key, recreate the primary key as a nonclustered unique key, and then build the right clustered index:

CREATE CLUSTERED INDEX [CI.BigFactTable] ON BigFactTable
(TreatmentId, BigFactTableId)

The only problem: it took 1 hour and 40 minutes, during which the system was very busy.

Questions

  • What is the preferred approach for this?
  • Should I create a new table with the right clustering key from the start, insert the contents of the old table in the correct order, and then use a drop-and-rename approach? (see the sketch after this list)
  • Should I consider partitioning on TreatmentId?
  • Any other suggestions?
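
Regarding the second bullet, here is a minimal sketch of the create-then-rename approach, under the assumption that a maintenance window is available; every name other than BigFactTable, BigFactTableId, and TreatmentId is invented, and foreign keys, permissions, and any other nonclustered indexes would still have to be recreated afterwards.

-- Sketch only: build a replacement table clustered on (TreatmentId, BigFactTableId).
CREATE TABLE BigFactTable_New
( BigFactTableId bigint IDENTITY NOT NULL,
  TreatmentId    int NOT NULL
  -- other columns
);

CREATE CLUSTERED INDEX [CI.BigFactTable] ON BigFactTable_New (TreatmentId, BigFactTableId);

-- Load in the new clustered-key order; TABLOCK may allow minimal logging
-- depending on the recovery model.
SET IDENTITY_INSERT BigFactTable_New ON;

INSERT INTO BigFactTable_New WITH (TABLOCK)
       (BigFactTableId, TreatmentId /*, other columns */)
SELECT BigFactTableId, TreatmentId /*, other columns */
FROM BigFactTable
ORDER BY TreatmentId, BigFactTableId;

SET IDENTITY_INSERT BigFactTable_New OFF;

-- Swap the tables during a quiet window, then recreate the nonclustered primary
-- key, foreign keys, and remaining indexes. The old table still owns the original
-- constraint names, so drop it (or rename its constraints) before reusing them.
EXEC sp_rename 'BigFactTable', 'BigFactTable_Old';
EXEC sp_rename 'BigFactTable_New', 'BigFactTable';

ALTER TABLE BigFactTable
  ADD CONSTRAINT [PK.BigFactTable_New] PRIMARY KEY NONCLUSTERED (BigFactTableId);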

automation – Fastest way to build a data frame of means and standard deviations per time variable over partitions of a large dataset?

I want to generate means and standard deviations, hour by hour, for different subsets that partition the dataset.
On a small dataset this is simple; I just run code like the example below.
On a large dataset my method is not efficient: creating a huge number of variables named a to z (until the alphabet runs out, ...) to store all the partitions of my data, and repeatedly typing the criteria into the subset() function, is slow.
I'm trying to find a way to automate what I'm doing using purrr or other packages.

I have looked at the "purrr" package but I do not know how to use it.
For now I make do with doing the same thing by hand to calculate means on subsets of the data.

Example 1
Here is a reproducible example that does not rely on external links.

library(dplyr)
install.packages("geesmv")
library(geesmv)

data("toenail")
a<-subset(toenail,Treatment==0)%>%
  group_by(Month)%>%
  summarise(m=mean(Response),sd=sd(Response))


b<-subset(toenail,Treatment==1)%>%
  group_by(Month)%>%
  summarise(m=mean(Response),sd=sd(Response))

I cannot rbind a and b because the months observed for treatment group 0 and treatment group 1 are not the same. But I can copy and paste the data frames a and b into Excel for my needs.

Example 2: an example where I can rbind the partitioned data frames

reproducible example using the cd4 dataset:
http://www.mediafire.com/file/clk8c421ng7o26k/cd4.csv/file

#use the cd4 dataset
r<-read.csv(file.choose())

library(dplyr)
a<-subset(r,group01==0 & age<30)%>%
  group_by(week)%>%
  summarise(m=mean(logcd4),sd=sd(logcd4))
a1<-subset(r,group01==0 & age>=30)%>%
  group_by(week)%>%
  summarise(m=mean(logcd4),sd=sd(logcd4))

b<-subset(r,group01==1&age<30)%>%
  group_by(week)%>%
  summarise(m=mean(logcd4),sd=sd(logcd4))
b1<-subset(r,group01==1&age>=30)%>%
  group_by(week)%>%
  summarise(m=mean(logcd4),sd=sd(logcd4))
c<-rbind(a,a1,b,b1)
#c is the finished data frame I wanted to make that I'll import into Excel

In both examples, I have to produce something like this:

 # A tibble: 679 x 3
    week     m     sd
   <dbl> <dbl>  <dbl>
 1  0     2.71  1.05 
 2  3.57  2.71 NA    
 3  4.14  2.71 NA    
 4  4.71  1.79 NA    
 5  6.57  3.22 NA    
 6  6.86  2.30 NA    
 7  7     3.37 NA    
 8  7.29  3.76  0.560
 9  7.43  3.71  1.42 
10  7.57  1.47  1.05 
# … with 669 more rows

where each partition is stacked on top of the other.

I want to do the same thing, maybe using purrr, while avoiding creating many variables (a, a1, b, b1) and typing the subset() conditions, like group01 == 1 or age < 30 or age >= 30, several times.

If I were using a large dataset with more variables besides age, with four or more treatment groups instead of two, and I also had to subdivide by sex, height, marital status, province, political affiliation, and political party, I would want this to keep working too; doing it this way with dplyr is slow, tedious, and inefficient, especially as the subset criteria or the dimensionality of the dataset grows.

As you can see, having the age variable makes the process much more cumbersome in example 2.

I'm trying to find a more efficient way to do this, especially for cases where the cd4 dataset would contain more information. I do not know how to use Python.

Similar question but without reproducible example:
https://stackoverflow.com/questions/58240707/faster-way-to-make-new-variables-containing-data-frames-to-be-rbinded?noredirect=1#comment102855077_58240707

I think the difficulty of this task is related to the curse of dimensionality.
I cannot change the group_by condition.

Will the size of the Bitcoin full node become too large to run on a normal computer?

I am passionate about Bitcoin, but I have no background in computer science or cryptography. I once ran Bitcoin Core on my laptop, but I realized it was taking up too much space on my computer.

I understand the idea that the Bitcoin network is secure as long as any user can run a full Bitcoin Core node. However, what worries me is that one day the full node may become too large to run on a normal computer. Do Bitcoin developers have solutions to this problem? Or is it even really a problem?

I would like to hear answers from cryptocurrency developers or computer specialists.