mysql 8.0 – InnoDB: The page offset does not match the file offset: page offset: 262144

Our MySQL 8 database crashed due to lack of free space in the data directory. After we freed up more space, the database kept crashing with the same error.

I know there are a lot of questions about this kind of problem. Believe me, I've tried everything; otherwise I would not have opened this one.

I was only able to start it with recovery mode 6, but it still does not start without recovery mode. After searching the internet I tried many solutions, but nothing worked.

At this point I have stopped trying to recover the broken database and have created a new one with the same my.cnf, in the hope of importing the data from the .ibd files. I only changed the datadir in my.cnf: the data directory of the broken database was /var/lib/mysql/sas/, and the new one uses /var/lib/mysql/sas/data/data/, which lets me move the .ibd files over directly. I cannot dump the tables and import them into the new database because they are too big and there are too many of them; it would take at least a month.
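Roughly, the my.cnf change looks like this (paths as above; the forced-recovery line is only what keeps the old instance readable):

# my.cnf for the new instance -- identical to the old one except for datadir
[mysqld]
datadir = /var/lib/mysql/sas/data/data/
# (the old instance, with datadir = /var/lib/mysql/sas/, only starts when
#  innodb_force_recovery = 6 is set)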

What I did was:

1. create the table on the new database
2. discard its tablespace
3. move the .ibd file into the new database's data directory
4. import the tablespace
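In SQL terms the steps are roughly the following (schema and table names here are just placeholders; the real CREATE TABLE matches the original table definition exactly):

-- placeholder names; the CREATE TABLE must match the original definition
CREATE TABLE sas.t1 (id INT PRIMARY KEY, payload TEXT) ENGINE=InnoDB;
ALTER TABLE sas.t1 DISCARD TABLESPACE;
-- copy the old t1.ibd into the new instance's schema directory, fix ownership
ALTER TABLE sas.t1 IMPORT TABLESPACE;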

Every time, the import step fails with:

The page offset does not match the file offset: page offset: 262144, file offset: 262144

I can SELECT from the same table in the old, broken database when it is opened with recovery mode 6, so I do not think the .ibd file is corrupted. At this point I have run out of ideas.

I am totally desperate at this point and any help would be welcome. Thank you

scan – Is saving an Outlook e-mail attachment to disk in order to read the file's contents secure?

I'm trying to make sure that e-mail attachments are "what they say they are".

I want to use .SaveAsFile(string path) on the mail item's attachment object to save it to disk, then run exiftool.exe, which will read the metadata and tell me whether the attached file type really is what the extension says it is.

My concern is that I'm not sure whether there is any risk associated with this. As far as I know, I'm not executing the file just by reading its metadata, so no malicious payload should run, right?

I'm using this wrapper to read the file from C#:
https://github.com/AerisG222/NExifTool
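To make the idea concrete, here is a rough sketch of the check (not my actual code: it calls exiftool.exe directly through Process.Start instead of the NExifTool wrapper, and the paths are made up):

using System;
using System.Diagnostics;

class AttachmentCheck
{
    // attachmentPath is assumed to be the file written by Attachment.SaveAsFile().
    static string DetectedType(string attachmentPath)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "exiftool.exe",
            Arguments = $"-s -s -s -FileType \"{attachmentPath}\"", // print only the FileType value
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        using (var proc = Process.Start(psi))
        {
            string type = proc.StandardOutput.ReadToEnd().Trim();
            proc.WaitForExit();
            return type; // e.g. "PDF", "DOCX", "ZIP" -- compare against the extension
        }
    }

    static void Main()
    {
        Console.WriteLine(DetectedType(@"C:\temp\attachment.pdf")); // hypothetical path
    }
}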

I've got Bitdefender installed on the machine, and when I tried saving a test virus file it was immediately picked up and deleted.

I took a look at Redemption, which offers a SaveAsStream option that would let me work in memory instead of saving to disk, but I would rather save to disk if it presents no risk.

novice – Molecular .pdb file reader in rust

I am new to Rust and wanted to start with a medium-sized project to help me learn. I decided to write a basic quantum mechanics code, starting with this simple file reader. It takes a .pdb file, which is just a list of atom coordinates with some additional information. I've included a sample .pdb file for testing below; the file location string needs to be changed.

I'm trying to use idiomatic Rust style as much as possible. Any suggestions for improving the code and style, no matter how small, are welcome. Thanks.

methanol.pdb:

COMPND    methanol
HETATM    1   C  UNL     1      -0.372   0.002   0.003  1.00  0.00           C
HETATM    2   O  UNL     1       0.898  -0.575  -0.119  1.00  0.00           O
HETATM    3   H  UNL     1      -0.674  -0.119   1.051  1.00  0.00           H
HETATM    4   H  UNL     1      -0.314   1.066  -0.315  1.00  0.00           H
HETATM    5   H  UNL     1      -1.083  -0.525  -0.643  1.00  0.00           H
HETATM    6   H  UNL     1       1.545   0.150   0.024  1.00  0.00           H
CONECT    1    2    3    4    5
CONECT    2    6
END

file_readers.rs:

use std::fs;

#[derive(Debug)]
struct Atom {
    /// May include atomic charges, masses etc in future
    atomic_symbol: String,
    x_coord: f64,
    y_coord: f64,
    z_coord: f64
}

/// Iterates over lines in any pdb file to produce a vec of Atom structs.
/// Will be expanded in future to build a graph of the molecule as well.
pub fn read_pdb() {
    let data = fs::read_to_string(
        "/home//methanol.pdb")
        .expect("Unable to read file");

    // vec of strings where each element in vec is a line in the file
    let lines: Vec<&str> = data.lines().collect();
    let name: &str = &lines[0][10..];
    let mut molecule = Vec::new();

    // loop over each element in the vec, starting from the 1st (not 0th)
    for line in lines[1..].iter() {
        if &line[..3] == "HET" {
            // atomic_symbol may be 1/2 chars so it's defined over a range not a specific index
            let atom = Atom {
                atomic_symbol: line[13..15].trim().parse::<String>().unwrap(),
                // trim removes whitespace when there isn't a minus sign
                x_coord: line[32..38].trim().parse::<f64>().unwrap(),
                y_coord: line[40..46].trim().parse::<f64>().unwrap(),
                z_coord: line[48..54].trim().parse::<f64>().unwrap(),
            };
            molecule.push(atom)
        } else {
            break
        }
    }

    println!("{:?}\n{:?}", molecule, name);
}

Please be aware that the very specific ranges used to extract the coordinates and so on are taken from the official PDB format guide. Occasionally the coordinate fields can run together at high precision, e.g. -0.56789-0.12345, which is why slicing is used instead of splitting on whitespace.
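As a throwaway illustration (not part of the reader above), this is the case that fixed-column slicing handles and whitespace splitting does not, using 8-character coordinate fields like the PDB columns:

fn main() {
    // two coordinates touching, plus a third separated by spaces
    let coords = "-0.56789-0.12345 1.00000";
    // splitting on whitespace merges the first two numbers into one token
    let by_space: Vec<&str> = coords.split_whitespace().collect(); // ["-0.56789-0.12345", "1.00000"]
    // fixed 8-character columns still separate them cleanly
    let x: f64 = coords[0..8].trim().parse().unwrap();
    let y: f64 = coords[8..16].trim().parse().unwrap();
    let z: f64 = coords[16..24].trim().parse().unwrap();
    println!("{:?} vs ({}, {}, {})", by_space, x, y, z);
}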

8 – Searching for file names with the Search API

I want to create a Search API view that can search both content and files by keyword. I can search content without problems, but I have trouble matching file names. This is a Drupal 8 site.

I am using the built-in database backend.

I have created a search server and an index. The index includes the file name and the MIME type; both are defined as type "Fulltext". The index has been updated.

I've created a bare-bones view using this index that only displays file names, exposes a fulltext filter on the file name, and filters out anything that is not a PDF. If I search without any keywords, I see all of the PDF files, so the index can see them. However, when I search for an actual file name, it cannot find exact matches.

For example, I have a PDF called "Debts.pdf". Even with lowercasing and the capitalization removed, no variation of the word "debts" returns any results.

If I filter on the complete file name "Debts.pdf" with a plain (non-fulltext) filter, that works, but the fulltext search on the file name does not.

Is there another step required to search file names with the Search API? I was trying to avoid installing anything extra to look inside the files; I only want to search the file names. I had thought it would be easier …

php – How to make the IF condition recognize values greater than 100, and/or how to loop through an array obtained from a CSV file and get the desired value?

Good evening!

Gentlemen, I'm new to PHP and have the following problem.
I have two options below for solving it, but I can't get either of them to work completely.


FIRST OPTION:

This IF condition of mine does not calculate the freight value when the distance exceeds 1000 km.

The distance is calculated correctly using the Google Distance Matrix API.

Here is the IF section:

if ($distance > 0 && $distance <= 100)
    {
        $frete = (100 * 2.19);
    }elseif ($distance > 100 && $distance <= 200)
    {
        $frete = (100 * 1.35);
    }elseif ($distance > 200 && $distance <= 300)
    {
        $frete = (100 * 1.18);
    }elseif ($distance > 300 && $distance <= 400)
    {
        $frete = (100 * 1.11);
    }elseif ($distance > 400 && $distance <= 500)
    {
        $frete = (100 * 1.07);
    }elseif ($distance > 500 && $distance <= 600)
    {
        $frete = (100 * 1.04);
    }elseif ($distance > 600 && $distance <= 700)
    {
        $frete = (100 * 1.02);
    }elseif ($distance > 700 && $distance <= 800)
    {
        $frete = (100 * 1.01);
    }elseif ($distance > 800 && $distance <= 900)
    {
        $frete = (100 * 1.00);
    }elseif ($distance > 900 && $distance <= 1000)
    {
        $frete = (100 * 0.99);
    }elseif ($distance > 1000 && $distance <= 1100)
    {
        $frete = (100 * 0.99);
    }elseif ($distance > 1100 && $distance <= 1200)
    {
        $frete = (100 * 0.98);
    }
    elseif ($distance > 1200 && $distance <= 1300)
    {
        $frete = (100 * 0.98);
    }elseif ($distance > 1300 && $distance <= 1400)
    {
        $frete = (100 * 0.97);
    }elseif ($distance > 1400 && $distance <= 1500)
    {
        $frete = (100 * 0.97);
    }elseif ($distance > 1500 && $distance <= 1600)
    {
        $frete = (100 * 0.97);
    }
    elseif ($distance > 1600 && $distance <= 1700)
    {
        $frete = (100 * 0.97);
    }elseif ($distance > 1700 && $distance <= 1800)
    {
        $frete = (100 * 0.97);
    }elseif ($distance > 1800 && $distance <= 1900)
    {
        $frete = (100 * 0.97);
    }elseif ($distance > 1900 && $distance <= 2000)
    {
        $frete = (100 * 0.97);
    }
    elseif ($distance > 3000){
        $distance = 3000;
        $frete = (100 * 0.64);
    }

    echo $distance;
    echo "<br>";
    echo $frete;

Can someone help me figure out where I am going wrong?


SECOND OPTION:

I've tried doing this with an external CSV file. However, I cannot loop through the file to check whether the distance falls within one of its ranges.

Below is the code from my first attempt. I would actually prefer this option, since it gives cleaner code and easier maintenance when the prices change.

foreach ($csv->ler() as $linha) { // HERE I READ THE DATA FROM THE CSV FILE
    if ($distance >= $linha[0] && $distance <= $linha[1]) {
        for ($i = 0; $i < count($linha); $i++) {
            if ($distance != $linha[$i][0] && $distance != $linha[$i][1]) {
                echo floatval($distance);
                echo $linha;
            }
        }
    }
    var_dump($linha); // RETURNS ALL THE ARRAY DATA CORRECTLY
}

Example VARDUMP:

array (size=3) 
  0 => string '1' (length=1)
  1 => string '100' (length=3)
  2 => string '2.19' (length=4)
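Just to show the direction I am trying to go with the second option, here is a minimal sketch of the range lookup. It assumes each CSV row is min,max,factor exactly as in the var_dump above, uses plain fgetcsv() instead of my $csv->ler() wrapper, and fretes.csv is a made-up file name:

<?php
// sketch only: each CSV row is "min,max,factor" as in the var_dump above
function freightFor(float $distance, string $csvPath): ?float
{
    $handle = fopen($csvPath, 'r');
    if ($handle === false) {
        return null;
    }
    while (($linha = fgetcsv($handle)) !== false) {
        if ($distance >= (float) $linha[0] && $distance <= (float) $linha[1]) {
            fclose($handle);
            return 100 * (float) $linha[2]; // same "100 * factor" formula as the IF chain
        }
    }
    fclose($handle);
    return null; // no matching range found
}

echo freightFor(150.0, 'fretes.csv'); // a row "101,200,1.35" would print 135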

If anyone can help me with one or the other option, I will be very grateful to you.

Thank you in advance for your attention.

bash – adding data from a file to a template in a second file

I frequently download music performances from YouTube where the whole performance is a single audio file, and I write a .cue file to split the performance into individual tracks.
Because I do this frequently, I've started thinking about how to make it easier for myself.
Most of the time, these performances have the songs listed with their start times in the video comments.
After downloading the video as MP3, I usually start by writing a .cue file to split the MP3 file into its respective tracks.
This time, I just copied the track list into two files, one containing the start time of each track and the other the title of each track.
I would like to append each line of the start-time data file after each occurrence of the pattern "INDEX 01" in my .cue file, something like the sketch below.
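This awk sketch is roughly what I have in mind (file names are made up; it assumes times.txt already holds one start time per line in the MM:SS:FF form that cue sheets use, and template.cue has a bare "INDEX 01" line per track):

#!/usr/bin/env bash
# append the Nth line of times.txt to the Nth "INDEX 01" line of template.cue
awk '
    NR == FNR { times[FNR] = $0; next }     # first file: remember the start times
    /INDEX 01/ { $0 = $0 " " times[++i] }   # second file: append the next time
    { print }
' times.txt template.cue > playlist.cue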

Nginx – [warn] an upstream response is buffered to a temporary file /var/cache/nginx/proxy_temp/7/08/0000000087

Nginx acts as a reverse proxy for my website. The site sometimes malfunctions and serves error pages.

I restarted my application:
systemctl restart myapplication.service
and it did not help.

Then I restarted the SQL service:
systemctl restart mssql-server
and it did not help.

Then I restarted the nginx service:
systemctl restart nginx
and voilà, the site was working again.

I then looked at the nginx error log to try to understand the problem, and saw thousands of lines of this warning:

[warn] 31507#31507: *4839 an upstream response is buffered to a temporary file /var/cache/nginx/proxy_temp/2/20/0000000202 while reading upstream

This is the configuration in my default.conf file:

server {
    #listen        80;
    listen                 *:443 ssl;
    ssl_certificate        /etc/ssl/example.pem;
    ssl_certificate_key    /etc/ssl/example.key;
    server_name            example.com *.example.com;
    location / {
        proxy_pass         https://localhost:7001/;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection keep-alive;
        proxy_set_header   Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}

I've read an answer here that suggests setting proxy_max_temp_file_size to 0, but that makes no sense to me: if the buffer is full, nginx writes to a temporary file, so why limit that file to 0?

Another option I am considering is to increase the buffer sizes from their default values, but I am not sure if I should.
The documentation says:

Syntax:  proxy_buffer_size size;
Default: proxy_buffer_size 4k|8k;
Context: http, server, location

Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header. By default, the buffer size is equal to one memory page. This is either 4K or 8K, depending on the platform. It can be made smaller, however.
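For reference, these buffer directives would go inside the location block shown above; the values below are only an illustration, not a recommendation:

location / {
    proxy_pass               https://localhost:7001/;
    # larger in-memory buffers mean fewer responses spill into
    # /var/cache/nginx/proxy_temp, at the cost of more RAM per connection
    proxy_buffer_size        8k;
    proxy_buffers            16 32k;
    proxy_busy_buffers_size  64k;
    # proxy_max_temp_file_size 0;  # disables the temp file entirely; nginx then
    #                              # passes the response as it arrives from upstream
}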

What should I do?

document library – Should we migrate a 3 TB on-premises file server to the cloud?

We have a 3 TB file server that we plan to migrate to SharePoint Online, split across 13 different document libraries.
Of the 3 TB of data:
50% of the files are Photoshop and CorelDRAW;
the remaining 50% are Word, Excel, PowerPoint, PDF and images.

After the migration, the data will be shared among 50 users in my office using the OneDrive sync client.

Should I go ahead with this or not?
Which tools are recommended for this migration?

thank you,
Nikunj Dalal

Where can I find the location of the main sidebar file?

(screenshot of the sidebar list)

I'm trying to find the location of the sidebar list shown in the screenshot above. Where can I find this?

File System – How can I get a blog's content via SFTP?

I've inherited a site that has been compromised (I'm not sure of the wordpress version, the crash happened last summer). I have CLI access and I can use SFTP on the server. The only thing I want to get is the content of the blogs. I was able to see the SFTP mysql files, but they are only .frm files. What is the best way to get this content?