aklog: a pioctl failed while setting tokens (18.04 LTS; the afs daemon seems to be running)

I have become unable to use AFS in recent days; aklog now fails with:

aklog: a pioctl failed while setting tokens for cell (...)

The standard advice seems to be to make sure the daemon is running, but that does not seem to be the problem. Here is a summary of starting the service (for good measure; it seemed to be running anyway) and checking its status before hitting the problem again.

me@mine:~$ /etc/init.d/openafs-client start
( ok ) Starting openafs-client (via systemctl): openafs-client.service.
mine:~$ sudo service openafs-client status
(sudo) password for me: 
● openafs-client.service - OpenAFS client
   Loaded: loaded (/lib/systemd/system/openafs-client.service; enabled; vendor preset: enabled)
   Active: active (exited) since Thu 2019-08-22 11:59:48 BST; 31min ago
  Process: 3242 ExecStopPost=/sbin/rmmod $KMOD (code=exited, status=0/SUCCESS)
  Process: 3241 ExecStop=/bin/umount -af -t afs (code=exited, status=0/SUCCESS)
  Process: 3240 ExecStop=/bin/umount -a -t afs (code=exited, status=0/SUCCESS)
  Process: 3236 ExecStop=/usr/share/openafs/openafs-client-postcheck (code=exited, status=0/SUCCESS)
  Process: 3235 ExecStop=/bin/grep -qv ^1$ /proc/sys/kernel/modules_disabled (code=exited, status=0/SUCCESS)
  Process: 3306 ExecStartPost=/usr/bin/fs sysname $AFS_SYSNAME (code=exited, status=0/SUCCESS)
  Process: 3305 ExecStartPost=/usr/bin/fs setcrypt $AFS_SETCRYPT (code=exited, status=0/SUCCESS)
  Process: 3304 ExecStart=/sbin/afsd $AFSD_ARGS (code=exited, status=0/SUCCESS)
  Process: 3294 ExecStartPre=/usr/share/openafs/openafs-client-precheck (code=exited, status=0/SUCCESS)

Aug 22 11:59:48 mine afsd(3304):        -fakestat-all             Enable fakestat support for all mounts
Aug 22 11:59:48 mine afsd(3304):        -nomount                  Do not mount AFS
Aug 22 11:59:48 mine afsd(3304):        -backuptree               Prefer backup volumes for mountpoints in backup volumes
Aug 22 11:59:48 mine afsd(3304):        -rxbind                   Bind the Rx socket (one interface only)
Aug 22 11:59:48 mine afsd(3304):        -settime                  set the time
Aug 22 11:59:48 mine afsd(3304):        -disable-dynamic-vcaches  disable stat/vcache cache growing as needed
Aug 22 11:59:48 mine afsd(3304):        -dynroot-sparse           Enable dynroot support with minimal cell list
Aug 22 11:59:48 mine fs(3305): Usage: /usr/bin/fs setcrypt -crypt  (-help)
Aug 22 11:59:48 mine fs(3306): Usage: /usr/bin/fs sysname (-newsys +) (-help)
Aug 22 11:59:48 mine systemd(1): Started OpenAFS client.
mine:~$ aklog
aklog: a pioctl failed while setting tokens for my.cell
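Here is a quick diagnostic I can run to rule out the other usual suspects besides afsd, namely a missing Kerberos ticket or an unmounted /afs (a minimal sketch; klist and tokens are the standard Kerberos/OpenAFS tools, and each check degrades gracefully if a tool is absent):

```shell
# Minimal diagnostic sketch: aklog needs a valid Kerberos TGT and a mounted
# /afs in addition to a running afsd; record which pieces are present.
report=/tmp/afs-diag.txt
: > "$report"
if mount | grep -qi afs; then
    echo "afs filesystem appears mounted" >> "$report"
else
    echo "NOTE: /afs does not appear to be mounted" >> "$report"
fi
if command -v klist >/dev/null && klist -s; then
    echo "valid Kerberos TGT present" >> "$report"
else
    echo "NOTE: no valid Kerberos ticket (klist -s failed or klist missing)" >> "$report"
fi
if command -v tokens >/dev/null; then
    tokens >> "$report"
else
    echo "NOTE: OpenAFS 'tokens' tool not found" >> "$report"
fi
cat "$report"
```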

interpreters – Access heap objects while JavaScript is running

I want to create a program that takes JavaScript code as input and interprets it line by line, displaying the heap objects at each step, something like the heap snapshot in Chrome DevTools. I plan to use V8 as the interpreter, but I have no idea how this could be implemented. I also want to be able to trigger the GC on demand.

bitcoind – Configuring and running multiple C-lightning nodes on the same machine

You can run lightningd --help to get a list of the arguments you can use to get started. In its output there is:

--lightning-dir=                Set working directory. All other files are
                                     relative to this
                                      (default: "/home/user/.lightning")

This means that you can define your own lightning directory by calling lightningd --lightning-dir=/some/path/to/some/directory

Now you can either pass all the configuration values as additional arguments to your call, or you can put a configuration file in /some/path/to/some/directory/ that contains everything. By default it is named config, but you can also use lightningd's command-line argument --conf=/path/to/some/conffile to point to a file in a different location.

One thing to remember is that the two lightning nodes must listen on different TCP/IP ports. You do this by setting the port in the announce address in the configuration file: announce-addr=IP-ADDR:PORT, where you replace IP-ADDR with your address and PORT with the port.
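As a concrete sketch, two node directories could look like this (the directory names, port numbers, and announce IP below are purely illustrative, and lightningd itself is only shown commented out):

```shell
# Two isolated c-lightning working directories, each with its own config
# file using a distinct port.
mkdir -p /tmp/ln-node1 /tmp/ln-node2
cat > /tmp/ln-node1/config <<'EOF'
network=bitcoin
addr=0.0.0.0:9735
announce-addr=203.0.113.10:9735
EOF
cat > /tmp/ln-node2/config <<'EOF'
network=bitcoin
addr=0.0.0.0:9736
announce-addr=203.0.113.10:9736
EOF
# Each node would then be started with its own directory:
#   lightningd --lightning-dir=/tmp/ln-node1
#   lightningd --lightning-dir=/tmp/ln-node2
grep announce-addr /tmp/ln-node1/config /tmp/ln-node2/config
```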

Obviously, when interacting with each of the nodes, you have to tell lightning-cli which one you mean. Here too, --help helps (:

lightning-cli --help
Usage: lightning-cli  (...)
--lightning-dir=  Set working directory. All other files are relative to this (default: "/home/user/.lightning")

This means that you can, for example, do lightning-cli --lightning-dir=/some/path/to/somedir/ getinfo

How to find vbscripts running on Windows servers

This is a question from a Managed Service Provider managing thousands of Windows 2012-2016 servers. The Microsoft hotfix of August 2019 is supposed to break many (all?) VB scripts.

I have the ability to run WMI and/or Ansible jobs against these machines, or to send people onto them at night via Remote Desktop.

Is there a way to know whether VB scripts are running on a given machine (as part of disparate application services)? It has been suggested to simply look for *.vba files on the file system, but I think this could lead to false positives and false negatives.
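For the file-system side of the inventory, the scan itself is simple. Here is a sketch using POSIX find against a fabricated tree (on the actual Windows servers the equivalent would run via PowerShell or Ansible's win_find module; the directory and file names are made up for illustration):

```shell
# Build a tiny fake tree, then enumerate VBScript-related files in it.
root=/tmp/vbscan-demo
mkdir -p "$root/app"
printf 'WScript.Echo "hello"\n' > "$root/app/job.vbs"
printf '<job><script language="VBScript"/></job>\n' > "$root/app/task.wsf"
# .vbs and .wsf are the usual VBScript extensions; .vba kept per the question.
find "$root" -type f \( -iname '*.vbs' -o -iname '*.wsf' -o -iname '*.vba' \) -print
```

Note that a pure file scan misses scripts embedded in scheduled tasks or services that invoke cscript/wscript with generated files, which is one source of the false negatives mentioned above.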

terminal – Running FFmpeg in Automator / AppleScript

I have previously written an AppleScript to automate a task that I often do in my work.

With Apple's update to Catalina, I'm going to lose the use of QuickTime 7 (which is part of my image-sequencing workflow). I want to take this opportunity to rewrite my script.

I am doing it step by step, and the first step runs FFmpeg.

I wrote a script:

ffmpeg -r 25 -f image2 -pattern_type glob -i '*.JPG' -codec:v prores_ks -profile:v 0 imagemagick_TL_Test_01.mov

It works in Terminal if I navigate to the folder and run it. Awesome.

I now want to find a way to make the action drag and drop.

I have tried to adapt my old code, inserting the ffmpeg call in the appropriate section so that it runs on the dropped folder, but I have encountered error after error. Ideally, the output file would be named after the grandparent folder and saved to the parent folder.

on open dd

    repeat with d in dd
        set d to d's contents
        tell application "Finder"
            set seq1 to (d's file 1 as alias)
            set dparent to d's container as alias
            set mov to "" & dparent & (dparent's name) & ".mov"
        end tell
        do shell script "d=" & d's POSIX path's quoted form & "
/opt/local/bin/ffmpeg -r 25 -f image2 -i \"" & seq1 & "\" '*.JPG' -codec:v prores_ks -profile:v 0 \"" & dparent & ".mov\" && exit"
    end repeat
end open

This gives me the error:

(image2 @ 0x7ff9cd000000) Could find no file with path '***:Users:***:Desktop:imagemagick_TL_Test:01:_DAN7741.JPG' and index in the range 0-4
***:Users:***:Desktop:imagemagick_TL_Test:01:_DAN7741.JPG: No such file or directory

All pointers would be greatly appreciated! Thank you!
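The error shows ffmpeg was handed an HFS-style (colon-separated) path. For reference, the shell step the droplet ultimately needs can be sketched in isolation with POSIX paths (the directories below are fabricated, and the ffmpeg command is only printed here, not executed, since it matches the one already working in Terminal):

```shell
# Derive the output movie path from the dropped frames folder's parent,
# then print the ffmpeg command that the droplet would run.
d=/tmp/seq-demo/MyShoot/frames        # stand-in for the dropped folder
mkdir -p "$d"
parent=$(dirname "$d")                # /tmp/seq-demo/MyShoot
name=$(basename "$parent")            # MyShoot
out="$parent/$name.mov"
echo "cd '$d' && ffmpeg -r 25 -f image2 -pattern_type glob -i '*.JPG' -codec:v prores_ks -profile:v 0 '$out'"
```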

certificate – Installing SSL on Windows running IIS 10

I have a VPS running Windows 2016 and IIS 10. I use this server for a C# web application on port 80 and a WordPress website with purchasing functionality on port 443. I purchased an SSL certificate that came with four .crt files (the domain .crt, AddTrustExternalCARoot.crt, USERTrustRSAAddTrustCA.crt, and USERTrustRSADomainValidationSecureServerCA.crt) to allow HTTPS access to the WordPress site. I was able to install the domain .crt file via IIS Manager, but none of the other .crt files. I could not bind the domain to port 443 because IIS Manager said it was already in use.

I would like to allow users to access the WordPress site through http or https. How can I do this?

SQL error while running the Magento 2 command-line upgrade

When I run the following command:
php bin/magento s:up;

It throws the following error:

SQLSTATE[42S02]: Base table or view not found: 1146 Table 'admin_magen34.mg7a_sm_products_filter' doesn't exist, query was: SHOW CREATE TABLE mg7a_sm_products_filter

Can anyone help me with this problem?

SQL Server: I need to estimate the running time of a SELECT .. INTO ..

Making a backup of a large table (with SELECT .. INTO ..) took almost 4 hours on a machine with 4 CPUs and 16 GB of RAM. No external application/process was accessing the table during the operation.

This was a test environment, and on this basis I need to develop an estimate of the execution time for the same operation in the production environment. The production environment has 40 CPUs and 64 GB of RAM.

The CPUs are identical and the I/O systems are identical on both systems (that is, the disk type and disk layout are the same).

Is it realistic to think that the production SELECT .. INTO .. will complete in less than an hour, given that the production server has more than 10 times the processing power?
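To put numbers on that doubt: if the copy is mostly I/O-bound and the I/O systems are identical, Amdahl's law says only the CPU-bound fraction benefits from the extra cores. A back-of-envelope sketch (the 90% I/O share is a pure assumption for illustration, not a measured value):

```shell
# Amdahl-style estimate: ~240 min on the 4-CPU test box, 10x the CPUs in
# production, identical I/O. Assume (illustratively) 90% of time is I/O.
test_minutes=240
io_pct=90        # assumed I/O-bound share; does not speed up
cpu_speedup=10   # 40 CPUs vs 4
# new_time = test_minutes * (io_pct + (100 - io_pct) / cpu_speedup) / 100
est=$(( test_minutes * (io_pct * cpu_speedup + (100 - io_pct)) / (100 * cpu_speedup) ))
echo "estimated: $est minutes"   # still far above one hour under this assumption
```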

If it is not possible to answer this question based on the above, should I rerun the test and collect some metrics? If so, what should those metrics be?

Thank you in advance for sharing your thoughts!

c++11 – SQL-like database running on CSV files

I implemented a very constrained, trivial implementation of SQL queries running on user files (in CSV format). Where possible, I tried to use modern C++ features. The goal is to serve MySQL requests so that the user does not know that no MySQL server is installed. Simply put: mock MySQL using the file system.

I've divided the whole concept into 3 classes: CsvSql, Querry and Table.

Code below:


void DebugPrintVector(std::string vName, std::vector<std::string>& v);
void RemoveCharsFromStr(std::string& s, char c);


CsvSql::CsvSql() {}

void CsvSql::Connect(std::string host, std::string user, std::string password, std::string dataBase)
{
    std::ifstream dbFile;
    dbFile.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    try {
        dbFile.open(dataBase.c_str(), std::fstream::in | std::fstream::out | std::fstream::app);
    } catch (const std::ifstream::failure& e) {
        std::cerr << "Error: " << e.what();
    }

    if (dbFile.peek() == std::ifstream::traits_type::eof()) {
        throw std::runtime_error("Could not open file");
    }

    // the database file lists the available tables
    std::vector<std::string> tables;
    std::string table;
    while (!dbFile.eof() && (dbFile >> table)) {
        tables.push_back(table);
    }

    std::cout << "database in use: " << dataBase << std::endl;
}

// (the signature of this method was lost in the post's formatting;
// the name ExecuteQuerry is a reconstruction)
std::string CsvSql::ExecuteQuerry(Querry querry)
{
    std::vector<std::string> tokens = querry.GetTokens();
    DebugPrintVector("tokens", tokens);

    if (*tokens.begin() == "SELECT") {   // select querry should derive from base "QUERRY" class
        std::vector<std::string>::iterator iter = std::find(tokens.begin(), tokens.end(), std::string("FROM"));

        if (iter != tokens.end()) {
            std::vector<std::string> columnsQuerry(++tokens.begin(), iter);
            DebugPrintVector("columns in querry:", columnsQuerry);
            std::string tableInUse = *++iter;

            Table table(tableInUse);
            return table.GetSelectedColumnsContent(columnsQuerry);
        }
    }
    return std::string(" ");
}


Querry::Querry(std::string querry) : _querryData{querry} {}

std::vector<std::string> Querry::GetTokens()
{
    std::stringstream querryStream(_querryData);
    std::string token;
    std::vector<std::string> tokens;
    while (getline(querryStream, token, ' ')) {
        RemoveCharsFromStr(token, ',');
        tokens.push_back(token);
    }
    return tokens;
}


Table::Table(std::string tableName) : _tableFile{tableName} {}

std::vector<std::string> Table::GetColumnsNames(void)
{
    std::string header;
    getline(_tableFile, header);
    std::stringstream headerStream(header);
    std::string column;
    std::vector<std::string> columns;
    while (getline(headerStream, column, ',')) {
        RemoveCharsFromStr(column, ' ');
        columns.push_back(column);
    }
    return columns;
}

std::vector<int> Table::GetSelectedColumnsNumbers(std::vector<std::string> tableColumns, std::vector<std::string> querredColumns)
{
    std::vector<int> clmnsNb;
    for (int i = 0; i < querredColumns.size(); i++) {
        for (int j = 0; j < tableColumns.size(); j++) {
            if (tableColumns[j] == querredColumns[i]) {
                clmnsNb.push_back(j);
            }
        }
    }
    return clmnsNb;
}

std::string Table::GetFieldsFromSelectedColumnsNumbers(std::vector<int> clmnsNb)
{
    std::string querredFields;
    std::string line;
    while (getline(_tableFile, line)) {
        int i = 0;
        std::stringstream ss(line);
        std::string field;
        while (getline(ss, field, ',')) {
            if (std::find(clmnsNb.begin(), clmnsNb.end(), i) != clmnsNb.end()) {
                RemoveCharsFromStr(field, ' ');
                querredFields += field + " ";
            }
            i++;
        }
        querredFields += "\n";
    }
    return querredFields;
}

std::string Table::GetSelectedColumnsContent(std::vector<std::string> selectedColumns)
{
    std::vector<std::string> columnsNames = GetColumnsNames();
    std::vector<int> clmnsNb = GetSelectedColumnsNumbers(columnsNames, selectedColumns);
    return GetFieldsFromSelectedColumnsNumbers(clmnsNb);
}


void DebugPrintVector(std::string vName, std::vector<std::string>& v)
{
    std::cout << vName << ": ";
    for (const auto& s : v) {
        std::cout << s << " ";
    }
    std::cout << std::endl;
}

I know that GetSelectedColumnsContent depends on overly complicated methods, but I thought that this way I could optimize memory usage (I do not read the entire file into separate columns to perform the operation). Please share your opinion: does this code at least somewhat follow modern C++ style?
Best regards!

Running the x86_64 Docker image on arm32

Situation: you want to deploy a Docker application on an ODROID XU4 (octa-core arm32).

Problem: the Docker image is most likely built for x86_64.

Question: is it possible to run an x86_64 Docker image on an arm32 machine? If so, is there any configuration to be done (for example, QEMU)?
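For what it's worth, the usual route is QEMU user-mode emulation registered via binfmt_misc. A sketch follows; the image names are the common multiarch ones (verify they match your setup), the docker commands are shown commented out since they need Docker and a privileged run, and expect a substantial performance penalty on the XU4 plus the caveat that a 64-bit guest on a 32-bit host is memory-constrained:

```shell
# Show the host architecture (armv7l on an ODROID XU4, x86_64 on a PC).
uname -m
# One-time, privileged: register qemu interpreters for foreign-arch binaries.
#   docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# Then request the amd64 image explicitly; each binary runs under qemu:
#   docker run --rm --platform linux/amd64 amd64/alpine uname -m
```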