Why can I update a file belonging to root using sudo vi, but not append a line with echo "Thing" >>?

sudo elevates the process it invokes; it does not elevate any of the current shell's processing, such as redirection, globbing, and so on.

The redirection >> /etc/httpd/conf.d/vhosts.conf is processed by your current shell, which still runs with your own privileges.

You can try something like this:

sudo bash -c 'echo "Include thing" >> /etc/httpd/conf.d/vhosts.conf'


echo "Include thing" | sudo tee -a /etc/httpd/conf.d/vhosts.conf

python – File size when parsing XML

The code below takes a directory of XML files and summarizes them into a CSV file.
Currently, scanning around 60 XML files is fast, and the output is a CSV file of around 250 MB.

This is a really big file, and the reason is that the columns are repeated. I repeat the columns because each row must carry the full information: in cases where Z048 had multiple SetData lines, the other columns had to be repeated for each of those lines.


I plan to increase the number of XML files to around 5,000, which means the CSV file will get considerably larger.

I am asking this question to find out whether the size of my CSV file can be reduced in one way or another. While coding, I did try to keep the code fast and to avoid producing large CSV files.

from xml.etree import ElementTree as ET
from collections import defaultdict
import csv
from pathlib import Path

directory = 'path to a folder with xml files'

with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)

    headers = ('id', 'service_code', 'rational', 'qualify', 'description_num', 'description_txt', 'set_data_xin', 'set_data_xax', 'set_data_value', 'set_data_x')
    writer.writerow(headers)

    xml_files_list = list(map(str, Path(directory).glob('**/*.xml')))
    for xml_file in xml_files_list:
        tree = ET.parse(xml_file)
        root = tree.getroot()

        start_nodes = root.findall('.//START')
        for sn in start_nodes:
            row = defaultdict(str)

            # Values repeated on every SetData row of this START node
            repeated_values = dict()
            for k, v in sn.attrib.items():
                repeated_values[k] = v

            for rn in sn.findall('.//Rational'):
                repeated_values['rational'] = rn.text

            for qu in sn.findall('.//Qualify'):
                repeated_values['qualify'] = qu.text

            for ds in sn.findall('.//Description'):
                repeated_values['description_txt'] = ds.text
                repeated_values['description_num'] = ds.attrib['num']

            for st in sn.findall('.//SetData'):
                for k, v in st.attrib.items():
                    row['set_data_' + str(k)] = v
                for key in repeated_values.keys():
                    row[key] = repeated_values[key]
                row_data = [row[i] for i in headers]
                writer.writerow(row_data)
                row = defaultdict(str)

I cannot download a zip file

I'm new to the forum, so first of all, hello everyone. I cannot download a zip file from the marketplace. What should I do?


How to grayscale a MPO stereographic image file from a 3D camera? I want to use it as input for a CNC sculpture

Absolutely. MPO files are basically just a stereo pair, like the ones in old ViewMaster stereo image viewers. Parts of the image presented to the left eye are shifted laterally relative to the image presented to the right eye, which creates the illusion of 3D depth. What you would like to do is reverse-engineer the 3D depth map from the stereo pair. Given a depth map, it is a short step from there to creating 3D prints (though that is beyond the scope of this site).

Some resources to go further:

Once you have a depth map, there are solutions to use it to create the appropriate file for bas-relief or CNC sculpture.

I suggest searching for topics such as "bas-relief 3D printing from a depth map" or "lithophane 3D printing from a stereogram". A solution exists, but it will certainly take time and sweat to develop one if you do not want to pay for a commercial depth-mapping product.
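As a concrete starting point: an MPO container is essentially several complete JPEGs concatenated back to back. A minimal sketch (the function name is mine, and it assumes a well-formed file; a robust tool would instead parse the MP-format index stored in the first image's APP2 segment) that splits the container on JPEG start-of-image markers:

```python
def split_mpo(data: bytes) -> list:
    """Split an MPO container into its individual JPEG frames.

    Each frame in an MPO begins with the JPEG start-of-image
    marker FF D8 FF, so we slice the byte stream at every such
    marker.  (FF D8 FF should not occur inside properly
    byte-stuffed entropy-coded data, but parsing the APP2
    MP-format index is the robust approach.)
    """
    SOI = b"\xff\xd8\xff"
    offsets = []
    pos = data.find(SOI)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(SOI, pos + 1)
    # Pair each marker offset with the next one (or end of data).
    return [data[s:e] for s, e in zip(offsets, offsets[1:] + [len(data)])]
```

Each resulting frame is an ordinary JPEG that any image library can convert to grayscale (Pillow can even open .mpo files directly and step through the frames with seek()); the two grayscale frames are then the input to whatever stereo-matching tool you use to estimate depth.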

Search for torrents containing a specific file name

From what I've seen, almost all torrent search sites only let you search on the torrent's name.

I want to:

  • Find torrents that include a file with a particular file name.
  • Search for text in the torrent description.

Are these two kinds of search possible?

apache 2.4 – How to troubleshoot the GeoIP error: error opening the file /usr/share/GeoIP/GeoIP.mmdb

I am new to the GeoIP solution.
Some things have changed regarding legacy GeoIP dbs, which has made things a little more complicated.

I'm on AWS Linux

I installed GeoIP:

rpm -qa | grep GeoIP



apache / httpd is:

rpm -qa | grep httpd



I created an account on the maxmind website and filled out /etc/GeoIP.conf like this:


AccountID redacted

LicenseKey redacted

EditionIDs GeoLite2-ASN GeoLite2-City GeoLite2-Country


The errors I see in /var/log/httpd/error_log look like this:

Error opening file /usr/share/GeoIP/GeoIP.mmdb

Error opening file /usr/share/GeoIP/GeoIP.mmdb

Error opening file /usr/share/GeoIP/GeoIP.mmdb

Database traversal error for ipnum = 886277125 - Perhaps the database is corrupt?

Database traversal error for ipnum = 886277125 - Perhaps the database is corrupt?

Database traversal error for ipnum = 168300841 - Perhaps the database is corrupt?

Database traversal error for ipnum = 168300643 - Perhaps the database is corrupt?

Database traversal error for ipnum = 168300841 - Perhaps the database is corrupt?

Database traversal error for ipnum = 0 - Perhaps the database is corrupt?

So I checked whether I am able to extract information from the database:

mmdblookup --file /usr/share/GeoIP/GeoLite2-Country.mmdb --ip xxx.xxx.xxx.x country names en


So now we know that the GeoIP database contains data, and that it correctly identifies the country of origin of our test server's IP address.
The problem at this point is that Apache is still not able to load the GeoIP database.

For me, the next steps in troubleshooting are not clear.

Also, I don't know how relevant this is, but this is installed on a Jira/Confluence server, so the apache/httpd service listens on 443 and then forwards traffic to the Java instance. I don't think it really matters, because the traffic reaches Apache first. It seems that Apache cannot access the GeoIP database for some reason. I tried changing the ownership of the database to apache:apache and root:apache, and that didn't work either.

What are the next steps?

    [Wed Jan 22 21:14:25.057803 2020] [so:warn] [pid 13168] AH01574: module ssl_module is already loaded, skipping
    VirtualHost configuration:
    *:443                  redacted
    *:80                   redacted
    ServerRoot: "/etc/httpd"
    Main DocumentRoot: "/var/www/html"
    Main ErrorLog: "/etc/httpd/logs/error_log"
    Mutex default: dir="/var/run/httpd/" mechanism=default
    Mutex mpm-accept: using_defaults
    Mutex cache-socache: using_defaults
    Mutex authdigest-opaque: using_defaults
    Mutex watchdog-callback: using_defaults
    Mutex proxy-balancer-shm: using_defaults
    Mutex rewrite-map: using_defaults
    Mutex ssl-stapling-refresh: using_defaults
    Mutex authdigest-client: using_defaults
    Mutex lua-ivm-shm: using_defaults
    Mutex ssl-stapling: using_defaults
    Mutex proxy: using_defaults
    Mutex authn-socache: using_defaults
    Mutex ssl-cache: using_defaults
    PidFile: "/var/run/httpd/httpd.pid"
    Define: DUMP_VHOSTS
    Define: DUMP_RUN_CFG
    User: name="apache" id=48
    Group: name="apache" id=48
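Since Apache apparently cannot open a database that mmdblookup reads fine, one early check is the plain filesystem permissions as seen by the httpd user. A minimal sketch (the function name and approach are my own; it deliberately looks only at the classic permission bits, so ACLs, SELinux, and parent-directory execute bits, any of which can also block httpd on an AWS Linux box, are not covered):

```python
import grp
import os
import pwd
import stat


def can_user_read(path: str, username: str) -> bool:
    """Rough check: can `username` read `path`, judging only by the
    owner/group/other permission bits on the file itself?"""
    st = os.stat(path)
    user = pwd.getpwnam(username)
    if st.st_uid == user.pw_uid:
        return bool(st.st_mode & stat.S_IRUSR)
    # Collect the user's supplementary groups plus the primary group.
    group_names = [g.gr_name for g in grp.getgrall() if username in g.gr_mem]
    group_names.append(grp.getgrgid(user.pw_gid).gr_name)
    if grp.getgrgid(st.st_gid).gr_name in group_names:
        return bool(st.st_mode & stat.S_IRGRP)
    return bool(st.st_mode & stat.S_IROTH)
```

If this returns True for the "apache" user and httpd still logs the open error, the cause is more likely SELinux context, a chrooted/sandboxed service, or a module that expects a legacy .dat database rather than an .mmdb one.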

Java Thymeleaf how to replace a file

I am writing an application for encrypting and decrypting files. I can't manage to upload and save the file. I am sending the file from my HTML input to the controller correctly, and inside the controller the bytes are encrypted, but how do I then save this file under the same path?
It should work like this: the user chooses the file to encrypt and clicks the "Encrypt" button; the chosen file is then replaced by a new (encrypted) one.

I have tried using .transferTo from the MultipartFile interface, but it does not work.

Anyone have a clue to fix this problem?

Here is my html:

and my controller:

public class EncryptionController {

    private EncryptionService encryptionService;

    public String encryptFile(@RequestParam("file") MultipartFile multipartFile) throws IOException {
        byte[] bytes = multipartFile.getBytes();
        byte[] encryptedBytes = encryptionService.encryption(bytes);

        File f = convert(multipartFile);


I will be really grateful for any help.

cassandra – Cassandra-Medusa ERROR: This error happened during the backup: [Errno 2] No such file or directory: 'nodetool'

I use Cassandra 3.11 with Cassandra-Medusa 0.4.1.
Cassandra is located at /usr/share/cassandra.

It shows:

medusa backup --backup-name=22012020

[2020-01-22 16:38:38] INFO: Monitoring provider is noop
[2020-01-22 16:38:38] WARNING: is ccm : 0
[2020-01-22 16:38:38] INFO: Creating snapshot
[2020-01-22 16:38:38] INFO: Saving tokenmap and schema
[2020-01-22 16:38:38] INFO: Node local does not have latest backup
[2020-01-22 16:38:38] INFO: Starting backup
[2020-01-22 16:38:38] ERROR: This error happened during the backup: [Errno 2] No such file or directory: 'nodetool'
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/medusa/backup.py", line 274, in main
    cassandra, node_backup, storage, differential_mode, config)

terminal – Returns a single line per file with grep

Various posts in other forums have suggested that the best way to make grep return a single line per file is to use -m 1, which is the --max-count option. However, when I run the following command, I get a single line from a single file, not one line per file:

grep -m 1 "library" ./ -R

This returns a single line from a single file:

.//results/fig/fig_functions.R:# library(plyr)


grep "library" ./ -R

Returns many files, each with several lines:

.//results/fig/fig_functions.R:# library(plyr)
.//results/fig/fig_functions.R:# library(grid)
(many more lines and files...)

I would like the command to return all files containing the text, but only one matching line per file. Am I using grep incorrectly, or is there another way to do this?
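If grep's -m behaviour is in doubt on a given platform, the intended result (the first matching line from every file in a directory tree) can be cross-checked with a small Python sketch; the function name and the recursive-walk details here are my own, not from any of the posts above:

```python
from pathlib import Path


def grep_first_match(pattern: str, root: str = ".") -> dict:
    """Return the first line containing `pattern` from every file
    under `root`, mimicking `grep -R -m 1 pattern root`.

    Maps file path -> first matching line; files with no match are
    omitted.  Unreadable files are silently skipped, like grep -s.
    """
    results = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            with path.open(errors="ignore") as fh:
                for line in fh:
                    if pattern in line:
                        results[str(path)] = line.rstrip("\n")
                        break  # -m 1: stop after the first match in this file
        except OSError:
            continue
    return results
```

With GNU grep, `grep -Rm1 "library" ./` is normally expected to apply the match limit per file; the sketch above simply makes that per-file semantics explicit.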