service – ubuntu 18.04: clamav running, tomcat dying

Ubuntu 18.04. 2 GB RAM + 512 MB swap.

When ClamAV runs, it consumes more than 800 MB of memory because it loads all of the virus signatures into RAM. For this reason, I have configured it to run once a day at 3 AM instead of running continuously.

Until now, Tomcat and ClamAV got along very well. But last night at 3 AM, the Tomcat service was killed when clamscan started running.

[4643256.375812] OOM killed process 8145 (clamscan) total-vm:1149268kB, anon-rss:969476kB, file-rss:4kB
[7667218.452649] OOM killed process 8865 (java) total-vm:4568248kB, anon-rss:1067312kB, file-rss:0kB

Mar 26 03:00:31 user systemd[1]: tomcat.service: Main process exited, code=killed, status=9/KILL
Mar 26 03:00:31 user systemd[1]: tomcat.service: Failed with result 'signal'.
Mar 26 03:17:08 user systemd[1]: Reloading The Apache HTTP Server.
Mar 26 03:17:08 user systemd[1]: Reloaded The Apache HTTP Server.

I know an upgrade is the obvious answer, but until then, my questions are:

  1. Is there a way to run ClamAV without it consuming 800+ MB?

  2. Is there a way to automatically restart Tomcat if something like this happens again? (See the sketch after this list.)

  3. Did Java really use 4,568,248 kB ≈ 4.5 GB of virtual memory, or am I missing something?
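
For question 2, one direction I am considering (a minimal sketch, assuming Tomcat runs as the systemd unit tomcat.service shown in the log above) is a systemd drop-in that restarts the service whenever it dies:

    # /etc/systemd/system/tomcat.service.d/restart.conf
    # (create with: sudo systemctl edit tomcat, then run: sudo systemctl daemon-reload)
    [Service]
    Restart=on-failure
    RestartSec=30

Since the OOM killer terminates the process with SIGKILL (status=9/KILL above), systemd treats it as a failure, so Restart=on-failure should bring Tomcat back up after 30 seconds.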

Windows 10 – Remove RAID-0 from two NVMe drives for dual boot Win10 and Ubuntu

I currently have two Samsung EVO ~500 GB NVMe drives running Windows 10 in RAID-0. I would like to delete the RAID-0 configuration and use one of the disks for Ubuntu 18.04 while the other disk stays on Windows 10. I have backed up my files, so Windows 10 will be a clean install. However, I still need to keep the same license key.

Is the following procedure correct?

  1. Download Windows 10 and Ubuntu ISO files
  2. Create two installation media drives, each with the appropriate ISO.
  3. Restart the PC and boot into the BIOS to delete the RAID configuration.
  4. Shut down and unplug one of the SSDs.
  5. Boot from the Windows 10 USB and perform a fresh installation.
  6. Once Windows is installed, shut down and reconnect the other SSD.
  7. Boot from the Ubuntu USB.
  8. Install Ubuntu on the newly reconnected SSD.

From the list above, I have a few questions:

  • From step 1: I know where to get an ISO image for Ubuntu, but I've only ever seen cracked versions of Windows. Is there a specific place to get a legitimate Windows 10 ISO?
  • From step 3: how do I actually remove the RAID configuration? Will there be an option to delete it, and should I then also switch to another controller mode, for example IDE or SATA?

Your license key will be detected automatically; once installed, Windows will already be activated. – Ramhound, March 6, '17 at 9:15 p.m.

  • From step 5 and the quote above: how can I make sure my Windows 10 license is kept? How will Windows determine my license key when one of the original disks is "missing"?

ALSO, if it is an NVMe drive, make sure you get the right RST drivers from Intel, otherwise Windows will not recognize the drives. – BillyBob March 9, 17 at 6:20 am

  • From the quote above: I'm still not sure I understand that. When should I install these RST drivers, given that both of my disks are NVMe? Does this happen before, during, or after the Windows installation?
  • From step 8: how will I know which of my two drives is the right one to install on (see the sketch after this list)? Do I have to disconnect the Windows 10 drive before trying to install Ubuntu?
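
For step 8, a way I assume I could tell the drives apart from the Ubuntu live session (a sketch only; device names will differ on my machine):

    # List drives with model, size and existing filesystems;
    # the NVMe drive that already carries Windows (ntfs) partitions is the one to leave alone.
    lsblk -o NAME,MODEL,SIZE,FSTYPE,MOUNTPOINT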

Any other advice would be greatly appreciated. This is the first time I have dealt with RAID configurations and I really want it to go well.

Thank you!

How to have a fake camera on Ubuntu without having a real physical camera?

I am writing a module that uses Chrome to access the camera. However, it will be deployed on computers without a real camera, so Google Chrome won't even ask for camera permission.

Note that using Chrome startup flags is not an option. I need to install something in Ubuntu that will act as a fake camera detectable by Chrome.

Is this possible, and if so, how?
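
For context, the kind of setup I have in mind is a virtual video device that Chrome would enumerate like a real webcam. A rough sketch of one approach I have read about (the v4l2loopback kernel module; the package names, device number, and test image are my assumptions):

    # Create a virtual /dev/video10 device that Chrome can detect
    sudo apt install v4l2loopback-dkms ffmpeg
    sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="FakeCam" exclusive_caps=1

    # Feed a static image into the fake camera so it produces frames
    ffmpeg -loop 1 -re -i test-image.png -f v4l2 -vcodec rawvideo -pix_fmt yuv420p /dev/video10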

virtualisation – VMware Shared Folder Not Working – Host Windows 10 – Guest Ubuntu 19.0 – VMware Player version 15.5.2

I am trying to share a folder from my host operating system (Windows 10 64-bit) to the guest operating system (Ubuntu 19.0) using the shared folder options in the virtual machine settings (VMware Player version 15.5.2 build-15785246). Even after sharing is enabled and both VMware Player and the guest OS have been restarted, the change is not visible and I see no shared folder in the /mnt/hgfs directory. I don't know what the problem is; I've seen several articles about this on Stack Exchange, but nothing helped, so I'm posting this thread. Any help is appreciated, thanks!

(screenshot: Virtual machine configuration)
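
For reference, the manual mount I have seen suggested elsewhere, but have not confirmed applies to my setup (it assumes open-vm-tools is installed in the guest and that the share is enabled on the host side):

    # Inside the Ubuntu guest
    sudo apt install open-vm-tools open-vm-tools-desktop
    sudo mkdir -p /mnt/hgfs
    # Mount all host shares under /mnt/hgfs via the hgfs FUSE client
    sudo vmhgfs-fuse .host:/ /mnt/hgfs -o allow_other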

Cron job to mount AWS EFS at startup – Ubuntu

I have an Ubuntu 18 server and AWS EFS storage that I am mounting over NFS. I have created a script called mount.sh and given it execute permission. I am able to run the script manually and the EFS is mounted.
I want this to happen automatically when the system boots.

I created a root crontab entry (sudo crontab -e) with the command below:
@reboot sleep 120 /script/mount.sh

mount.sh contains the command below:
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-xxxxxx.efs.ap-south-1.amazonaws.com:/ /efs

Can you help me mount this EFS at system boot? I'm very new to Ubuntu.
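
A minimal sketch of the crontab entry I think I am aiming for (the delay and the script need to be chained with &&, otherwise sleep is just handed the script path as an extra argument; the path /script/mount.sh is assumed unchanged):

    @reboot sleep 120 && /script/mount.sh

An /etc/fstab entry with the _netdev option is a common alternative, but the crontab fix above stays closest to what I already have.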

hard drive – Delete broken Ubuntu partition on macOS

I partitioned my MacBook Pro so I could use both Ubuntu and macOS. The Ubuntu partition is somewhat broken, so I want to delete it. I have tried to do this via Disk Utility, but I cannot delete the Ubuntu partition or change its size. I even tried in recovery mode, but I still can't delete the partition.

I don't see the disk in the sidebar; I only see it when I click the Partition button to show all the partitions.

Are there other ways to delete the partition so that I get a single Macintosh HD volume back?
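
For reference, the command-line route I have been reading about but have not yet tried (the partition identifiers below are placeholders, not my actual ones):

    # Find the identifier of the Ubuntu partition (e.g. disk0s4)
    diskutil list

    # Delete that partition, leaving free space in its place
    sudo diskutil eraseVolume free none disk0s4

    # Grow the macOS APFS container into the freed space (disk0s2 assumed to be the container)
    sudo diskutil apfs resizeContainer disk0s2 0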

ubuntu – How do I "IP route" to and from Docker containers and nodes connected via WireGuard?

Given the network configuration illustrated in the diagram below, how can we (my team) make $ ssh 10.10.10.(1|2|3) (that is, any of the 10.10.10.(1|2|3) container addresses) work from laptop1 (and laptop2)? The same ssh commands work when run from inside dockerhost but fail from the laptops.
(diagram: wg-docker-container-ip-routing-1)

We would specifically like to make this work without any WireGuard rigging inside the containers (although we tried that and it failed; details below) or any special Docker network (which we have not tried yet). But if we have to, we have to – we just want to know why.

However, we are hoping there is some sort of IP routing command we can run to make this work.
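
To illustrate the kind of change we imagine (purely our speculation, not something we have verified; the wg0 and docker0 interface names are assumptions on our part): route the container subnet through the dockerhost peer on the laptops, and forward between the WireGuard and Docker interfaces on dockerhost.

    # On each laptop's WireGuard config, under the dockerhost [Peer] section:
    #   AllowedIPs = 192.168.4.0/24, 10.10.10.0/24    # add the Docker container subnet

    # On dockerhost:
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -A FORWARD -i wg0 -o docker0 -j ACCEPT
    sudo iptables -A FORWARD -i docker0 -o wg0 -j ACCEPT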

Additional test scenarios and information:

Adding a WireGuard (Interface) Address = 192.168.4.10 to container1, plus the corresponding WireGuard "route" on dockerhost and laptop1, always results in the same behavior (ssh from laptop1 fails, from dockerhost it works). It is also the same before and after adding an "anywhere" firewall port opening for 192.168.4.0/24 on dockerhost.

Therefore, our team currently suspects an IP routing problem rather than a WireGuard-specific restriction. But we are speculating, because we are not deeply experienced in network routing.

Trying a variation of the following (from https://superuser.com/a/756146/98033) also didn't work (per the macOS ifconfig output, utun1 is, we think, the WireGuard interface):
route add -host 10.10.10.1 -interface utun1

Attempting variations of the answers at the following link, on the dockerhost.domain1.org side, has not (yet) worked:
https://unix.stackexchange.com/q/530790/36362

We are far from certain that we performed the above steps correctly, because we are not deeply experienced in IP network management.

The WireGuard version running on the macOS laptops follows. We have not yet found a way to update it to the v0.0.20200127-17 advertised at https://www.wireguard.com/install (there is no "upgrade" button in the App Store; removing /Applications/WireGuard.app and then reinstalling from the App Store results in the same older version; all of which is quite frustrating).

Current version of WireGuard macOS:

App version: 0.0.20191105 (16)
Go backend version: 0.0.20191013

As of 2020-03-19, the source for this document (including the ditaa text source for the Asciidoctor diagram) can be found here:

https://github.com/johnnyutahh/software-systems-docs/tree/master/sysadmin/networking/wireguard

ubuntu – Error during bwa loop with my files

I would like to run bwa (a tool that aligns the reads in a file against a reference sequence) in a loop over all my files.

This is the function for a single file:

     bwa mem /data/Adrian_227/Genomes_NCBI/NC_006262.1/NC_006262.1.fasta /data/fasta/PV001_trim.fa > /data/Adrian_227/Genomes_NCBI/NC_006262.1/NC_006262.1.sam

And I get the .sam file as output. But when I want to loop over my files, I run the following command:

  for infile in /data/fasta/*.fa; do bwa mem $infile /data/Adrian_227/Genomes_NCBI/NC_006262.1/NC_006262.1.fasta > outfile=$infile; done;

and the console just hangs without producing any file.
What is my mistake?
Thanks in advance.
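
For comparison, a sketch of the loop I think I am aiming for (reference first, reads second, and a distinct .sam output per input file; the output directory is taken from my single-file example):

    for infile in /data/fasta/*.fa; do
        sample=$(basename "$infile" .fa)
        # reference goes first, reads second; write one .sam per input file
        bwa mem /data/Adrian_227/Genomes_NCBI/NC_006262.1/NC_006262.1.fasta "$infile" \
            > /data/Adrian_227/Genomes_NCBI/NC_006262.1/"${sample}".sam
    done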

ubuntu – Adjust pm.start_servers or pm.min/max_spare_servers

Due to an upcoming event, we are expecting high traffic (approximately 2000 concurrent users) for several weeks on our Magento 2 powered e-commerce website running on Ubuntu, NGINX and PHP 7.1-FPM. Since our catalog is quite large, we have upgraded our DigitalOcean droplet to the highest specification, with 192 GB of RAM and 32 virtual CPUs.

The site runs at lightning speed; however, once we get past roughly 800 users it starts to slow down noticeably, and after a while it ends up returning 502 Bad Gateway errors, which we really can't afford during this temporary event.

Our most recent errors now point at pm.start_servers or pm.min/max_spare_servers:

    (16-Mar-2020 21:50:35) WARNING: (pool www) seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle$
    (16-Mar-2020 21:50:36) WARNING: (pool www) seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 16 children, there are 0 idl$

We have tried several settings, but nothing seems to work reliably. Here are our current ones:

/etc/php/7.1/fpm/pool.d/www.conf:

    pm = dynamic
    pm.start_servers = 20
    pm.min_spare_servers = 20
    pm.max_spare_servers = 50
    ;pm.process_idle_timeout = 10s;
    pm.max_requests = 0

So now we are looking for a solution, or a way to calculate these values exactly. Are there any other considerations or settings that we should change?
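
For what it's worth, the rule of thumb we have been working from (our own assumption, not an official Magento formula) is pm.max_children ≈ memory reserved for PHP-FPM divided by the average worker size, with the start/spare values derived from that. For example, if roughly 100 GB of the 192 GB were left to PHP-FPM and each Magento worker averaged about 750 MB:

    ; /etc/php/7.1/fpm/pool.d/www.conf -- illustrative values only
    pm = dynamic
    pm.max_children = 130        ; ~100 GB / ~750 MB per worker (measure real usage with: ps -C php-fpm7.1 -o rss=)
    pm.start_servers = 32
    pm.min_spare_servers = 16
    pm.max_spare_servers = 48
    pm.max_requests = 500        ; recycle workers periodically instead of 0 (never)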

Here is our Magento nginx.conf:

    root $MAGE_ROOT/pub;
    index index.php;
    autoindex off;
    charset UTF-8;
    error_page 404 403 = /errors/404.php;
    #add_header "X-UA-Compatible" "IE=Edge";

    # PHP entry point for setup application
    location ~* ^/setup($|/) {
        root $MAGE_ROOT;
        location ~ ^/setup/index.php {
            fastcgi_pass   fastcgi_backend;

            fastcgi_param  PHP_FLAG  "session.auto_start=off \n suhosin.session.cryptua=off";
            fastcgi_param  PHP_VALUE "memory_limit=2048M \n max_execution_time=18000";
            fastcgi_read_timeout 600s;
            fastcgi_connect_timeout 600s;
        fastcgi_buffers 256 16k;
        fastcgi_max_temp_file_size 0;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include        fastcgi_params;
        }

        location ~ ^/setup/(?!pub/). {
            deny all;
        }

        location ~ ^/setup/pub/ {
            add_header X-Frame-Options "SAMEORIGIN";
        }
    }

    # PHP entry point for update application
    location ~* ^/update($|/) {
        root $MAGE_ROOT;

        location ~ ^/update/index.php {
            fastcgi_split_path_info ^(/update/index.php)(/.+)$;
            fastcgi_pass   fastcgi_backend;
            fastcgi_index  index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            fastcgi_param  PATH_INFO        $fastcgi_path_info;
            include        fastcgi_params;
        }

        # Deny everything but index.php
        location ~ ^/update/(?!pub/). {
            deny all;
        }

        location ~ ^/update/pub/ {
            add_header X-Frame-Options "SAMEORIGIN";
        }
    }

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location /pub/ {
        location ~ ^/pub/media/(downloadable|customer|import|theme_customization/.*\.xml) {
            deny all;
        }
        alias $MAGE_ROOT/pub/;
        add_header X-Frame-Options "SAMEORIGIN";
    }

    location /static/ {
        # Uncomment the following line in production mode
        # expires max;

        # Remove signature of the static files that is used to overcome the browser cache
        location ~ ^/static/version {
            rewrite ^/static/(version[^/]+/)?(.*)$ /static/$2 last;
        }

        location ~* \.(ico|jpg|jpeg|png|gif|svg|js|css|swf|eot|ttf|otf|woff|woff2)$ {
           # add_header Cache-Control "public";
            add_header X-Frame-Options "SAMEORIGIN";
            expires +1y;

            if (!-f $request_filename) {
                rewrite ^/static/?(.*)$ /static.php?resource=$1 last;
            }
        }
        location ~* \.(zip|gz|gzip|bz2|csv|xml)$ {
           # add_header Cache-Control "no-store";
            add_header X-Frame-Options "SAMEORIGIN";
            expires    off;

            if (!-f $request_filename) {
               rewrite ^/static/?(.*)$ /static.php?resource=$1 last;
            }
        }
        if (!-f $request_filename) {
            rewrite ^/static/?(.*)$ /static.php?resource=$1 last;
        }
        add_header X-Frame-Options "SAMEORIGIN";
    }

    location /media/ {
        try_files $uri $uri/ /get.php$is_args$args;

        location ~ ^/media/theme_customization/.*\.xml {
            deny all;
        }

        location ~* \.(ico|jpg|jpeg|png|gif|svg|js|css|swf|eot|ttf|otf|woff|woff2)$ {
           # add_header Cache-Control "public";
            add_header X-Frame-Options "SAMEORIGIN";
            expires +1y;
            try_files $uri $uri/ /get.php$is_args$args;
        }
        location ~* \.(zip|gz|gzip|bz2|csv|xml)$ {
           # add_header Cache-Control "no-store";
            add_header X-Frame-Options "SAMEORIGIN";
            expires    off;
            try_files $uri $uri/ /get.php$is_args$args;
        }
        add_header X-Frame-Options "SAMEORIGIN";
    }

    location /media/customer/ {
        deny all;
    }

    location /media/downloadable/ {
        deny all;
    }

    location /media/import/ {
        deny all;
    }


    location /Preread/ {

            #add_header Cache-Control "no-store";
            #add_header X-Frame-Options "SAMEORIGIN";

            root /var/www/html/;
        #try_files $uri $uri/ /Preread/index.php?$args;
        #try_files $uri $uri/ /Preread/index.php?q=$uri&$args;
            #index index.php index.html index.htm;
        #try_files $uri $uri/ /index.php?q=$uri&$args;
            try_files $uri $uri/ /index.php?q=$uri&$args;
        #try_files $uri $uri/ /get.php$is_args$args;
        #allow all;
    #   try_files $uri $uri/ /get.php$is_args$args;
    }


    # PHP entry point for main application
    location ~ (index|get|static|report|404|503|cs|davidfile|health_check)\.php$ {
        try_files $uri =404;
        fastcgi_pass   fastcgi_backend;
        fastcgi_buffers 1024 4k;

        fastcgi_param  PHP_FLAG  "session.auto_start=off \n suhosin.session.cryptua=off";
        fastcgi_param  PHP_VALUE "memory_limit=1048M \n max_execution_time=18000";
        fastcgi_read_timeout 600s;
        fastcgi_connect_timeout 600s;

        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }

    gzip on;
    gzip_disable "msie6";

    gzip_comp_level 6;
    gzip_min_length 1100;
    gzip_buffers 16 8k;
    gzip_proxied any;
    gzip_types
        text/plain
        text/css
        text/js
        text/xml
        text/javascript
        application/javascript
        application/x-javascript
        application/json
        application/xml
        application/xml+rss
        image/svg+xml;
    gzip_vary on;

    # Banned locations (only reached if the earlier PHP entry point regexes don't match)
    location ~* (\.php$|\.htaccess$|\.git) {
        deny all;
    }

Expert help would be greatly appreciated, thanks

ZFS: the second vdev of the zfs pool fails after restarting on Ubuntu

I'm a bit lost as to what exactly happened, and how to fix it, with a recently extended ZFS configuration on Ubuntu 18.04.

I have had a storage server that has worked well for years, using ZFS with 2 pools, each containing 10+ disks. Everything was fine until … we decided to expand a pool by adding a new 10-disk vdev. After plugging the disks in, everything worked fine. Here's what I did to add the devices (which, I now know, I should have done using /dev/disk/by-id names :-():

~$ sudo modprobe zfs
~$ dmesg|grep ZFS
(   17.948569) ZFS: Loaded module v0.6.5.6-0ubuntu26, ZFS pool version 5000, ZFS filesystem version 5
~$ lsscsi
(0:0:0:0)    disk    HGST     HUS724020ALS640  A1C4  /dev/sda
(0:0:1:0)    disk    HGST     HUS724020ALS640  A1C4  /dev/sdb
(0:0:2:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sdc
(0:0:3:0)    enclosu LSI      SAS2X28          0e12  -
(1:0:0:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sdd
(1:0:1:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sde
(1:0:2:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sdf
(1:0:3:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sdg
(1:0:4:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sdh
(1:0:5:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sdi
(1:0:6:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sdj
(1:0:7:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sdk
(1:0:8:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sdl
(1:0:9:0)    disk    HGST     HUS726040AL5210  A7J0  /dev/sdm
(1:0:10:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdn
(1:0:11:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdo
(1:0:12:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdp
(1:0:13:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdq
(1:0:14:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdr
(1:0:15:0)   disk    HGST     HUS726060AL5210  A519  /dev/sds
(1:0:16:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdt
(1:0:17:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdu
(1:0:18:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdv
(1:0:19:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdw
(1:0:20:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdx
(1:0:21:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdy
(1:0:22:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdz
(1:0:23:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdaa
(1:0:24:0)   enclosu LSI CORP SAS2X36          0717  -
(1:0:25:0)   disk    HGST     HUS726040AL5210  A7J0  /dev/sdab
(1:0:26:0)   enclosu LSI CORP SAS2X36          0717  -
(1:0:27:0)   disk    HGST     HUH721010AL4200  A384  /dev/sdac      ===>from here below the new plugged disks
(1:0:28:0)   disk    HGST     HUH721010AL4200  A384  /dev/sdad
(1:0:30:0)   disk    HGST     HUH721010AL4200  A384  /dev/sdae
(1:0:31:0)   disk    HGST     HUH721010AL4200  A384  /dev/sdaf
(1:0:32:0)   disk    HGST     HUH721010AL4200  A384  /dev/sdag
(1:0:33:0)   disk    HGST     HUH721010AL4200  A384  /dev/sdah
(1:0:34:0)   disk    HGST     HUH721010AL4200  A384  /dev/sdai
(1:0:35:0)   disk    HGST     HUH721010AL4200  A384  /dev/sdaj
(1:0:36:0)   disk    HGST     HUH721010AL4200  A384  /dev/sdak
(1:0:37:0)   disk    HGST     HUH721010AL4200  A384  /dev/sdal

Next, I added the drives as a new raidz2 vdev to the existing archive pool. It seemed to work fine afterwards:

~$ sudo zpool add -f archive raidz2 sdac sdad sdae sdaf sdag sdah sdai sdaj sdak sdal
~$ sudo zpool status
  pool: archive
state: ONLINE
  scan: scrub repaired 0 in 17h18m with 0 errors on Sun Dec  8 17:42:17 2019
config:
        NAME                        STATE     READ WRITE CKSUM
        archive                     ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            scsi-35000cca24311c340  ONLINE       0     0     0
            scsi-35000cca24311ecbc  ONLINE       0     0     0
            scsi-35000cca24d019248  ONLINE       0     0     0
            scsi-35000cca24311e30c  ONLINE       0     0     0
            scsi-35000cca243113ab0  ONLINE       0     0     0
            scsi-35000cca24311c188  ONLINE       0     0     0
            scsi-35000cca24311e7c8  ONLINE       0     0     0
            scsi-35000cca24311e3f0  ONLINE       0     0     0
            scsi-35000cca24311e7bc  ONLINE       0     0     0
            scsi-35000cca24311e40c  ONLINE       0     0     0
            scsi-35000cca243118054  ONLINE       0     0     0
            scsi-35000cca243115cb8  ONLINE       0     0     0
          raidz2-1                  ONLINE       0     0     0
            sdac                    ONLINE       0     0     0
            sdad                    ONLINE       0     0     0
            sdae                    ONLINE       0     0     0
            sdaf                    ONLINE       0     0     0
            sdag                    ONLINE       0     0     0
            sdah                    ONLINE       0     0     0
            sdai                    ONLINE       0     0     0
            sdaj                    ONLINE       0     0     0
            sdak                    ONLINE       0     0     0
            sdal                    ONLINE       0     0     0

errors: No known data errors

  pool: scratch
state: ONLINE
  scan: scrub repaired 0 in 9h8m with 0 errors on Sun Dec  8 09:32:15 2019
config:
        NAME                        STATE     READ WRITE CKSUM
        scratch                     ONLINE       0     0     0
          raidz2-0                  ONLINE       0     0     0
            scsi-35000cca24311e2e8  ONLINE       0     0     0
            scsi-35000cca24311e858  ONLINE       0     0     0
            scsi-35000cca24311ea5c  ONLINE       0     0     0
            scsi-35000cca24311c344  ONLINE       0     0     0
            scsi-35000cca24311e7ec  ONLINE       0     0     0
            scsi-35000cca24311bcb8  ONLINE       0     0     0
            scsi-35000cca24311e8a8  ONLINE       0     0     0
            scsi-35000cca2440b4f98  ONLINE       0     0     0
            scsi-35000cca24311e8f0  ONLINE       0     0     0
            scsi-35000cca2440b4ff0  ONLINE       0     0     0
            scsi-35000cca243113e30  ONLINE       0     0     0
            scsi-35000cca24311e9b4  ONLINE       0     0     0
            scsi-35000cca243137e80  ONLINE       0     0     0

errors: No known data errors

However, the reboot apparently shuffled the device naming (the sdX assignments; I'm not sure, but it seems very likely). At least, that is what I can make of it so far after reading a lot of documentation and issue reports.
The current state is as follows. The scratch pool works fine; the archive pool does not:

~$ sudo zpool status -v
  pool: archive
state: UNAVAIL
status: One or more devices could not be used because the label is missing
or invalid.  There are insufficient replicas for the pool to continue
functioning.
action: Destroy and re-create the pool from
a backup source.
  see: http://zfsonlinux.org/msg/ZFS-8000-5E
  scan: none requested
config:

NAME                        STATE    READ WRITE CKSUM
archive                    UNAVAIL      0    0    0  insufficient replicas
  raidz2-0                  ONLINE      0    0    0
    scsi-35000cca24311c340  ONLINE      0    0    0
    scsi-35000cca24311ecbc  ONLINE      0    0    0
    scsi-35000cca24d019248  ONLINE      0    0    0
    scsi-35000cca24311e30c  ONLINE      0    0    0
    scsi-35000cca243113ab0  ONLINE      0    0    0
    scsi-35000cca24311c188  ONLINE      0    0    0
    scsi-35000cca24311e7c8  ONLINE      0    0    0
    scsi-35000cca24311e3f0  ONLINE      0    0    0
    scsi-35000cca24311e7bc  ONLINE      0    0    0
    scsi-35000cca24311e40c  ONLINE      0    0    0
    scsi-35000cca243118054  ONLINE      0    0    0
    scsi-35000cca243115cb8  ONLINE      0    0    0
  raidz2-1                  UNAVAIL      0    0    0  insufficient replicas
    sdac                    FAULTED      0    0    0  corrupted data
    sdad                    FAULTED      0    0    0  corrupted data
    sdae                    FAULTED      0    0    0  corrupted data
    sdaf                    FAULTED      0    0    0  corrupted data
    sdag                    FAULTED      0    0    0  corrupted data
    sdah                    FAULTED      0    0    0  corrupted data
    sdai                    FAULTED      0    0    0  corrupted data
    sdaj                    FAULTED      0    0    0  corrupted data
    sdak                    FAULTED      0    0    0  corrupted data
    sdal                    FAULTED      0    0    0  corrupted data

  pool: scratch
state: ONLINE
  scan: scrub repaired 0 in 16h36m with 0 errors on Sun Feb  9 17:00:25 2020
config:

NAME                        STATE    READ WRITE CKSUM
scratch                    ONLINE      0    0    0
  raidz2-0                  ONLINE      0    0    0
    scsi-35000cca24311e2e8  ONLINE      0    0    0
    scsi-35000cca24311e858  ONLINE      0    0    0
    scsi-35000cca24311ea5c  ONLINE      0    0    0
    scsi-35000cca24311c344  ONLINE      0    0    0
    scsi-35000cca24311e7ec  ONLINE      0    0    0
    scsi-35000cca24311bcb8  ONLINE      0    0    0
    scsi-35000cca24311e8a8  ONLINE      0    0    0
    scsi-35000cca2440b4f98  ONLINE      0    0    0
    scsi-35000cca24311e8f0  ONLINE      0    0    0
    scsi-35000cca2440b4ff0  ONLINE      0    0    0
    scsi-35000cca243113e30  ONLINE      0    0    0
    scsi-35000cca24311e9b4  ONLINE      0    0    0
    scsi-35000cca243137e80  ONLINE      0    0    0

errors: No known data errors

I have tried zpool export archive (also with -f), but it complains about a missing device.

~$ sudo zpool export -f archive
cannot export 'archive': one or more devices is currently unavailable

Obviously the import also fails ….

What else can I try? I just can't believe that a "simple" reordering of the disks has ruined all of the data in the archive pool.
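
For completeness, the next step I have been reading about but have not yet dared to run (I am not sure whether it is safe, or even possible, while the pool cannot be exported):

    # Re-scan the pool members using persistent device names instead of sdX
    sudo zpool export archive            # currently fails for me, as shown above
    sudo zpool import -d /dev/disk/by-id archive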