Copy all files with a .wav extension to a set of folders in a new directory using Python

I want to copy all files with a .wav extension in a directory to a new directory using Python. Each file and all of its duplicates should be copied into their own folder, with an incrementing count appended to the name (filename_1, filename_2, etc.). This means the number of folders in the new directory should equal the number of unique .wav file names in the old directory. My problem is similar to this question; the only difference lies in the structure of the new directory.

For example, suppose I have the following .wav files in the old directory:

a.wav, a.wav, b.wav, b.wav, b.wav

Then the new directory should be created with the following folders:

a, b

a contains – a_1.wav, a_2.wav

b contains – b_1.wav, b_2.wav, b_3.wav
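
Something along these lines is the behaviour I am after (a rough sketch, assuming the duplicates sit in different subfolders of the old directory, since a single folder cannot hold two files with the same name):

    import os
    import shutil
    from collections import defaultdict

    def copy_wavs(old_dir, new_dir):
        # Group every .wav found anywhere under old_dir by its file name.
        groups = defaultdict(list)
        for root, _, files in os.walk(old_dir):
            for name in files:
                if name.lower().endswith(".wav"):
                    groups[name].append(os.path.join(root, name))

        # One folder per unique name; copies are renamed name_1.wav, name_2.wav, ...
        for name, paths in groups.items():
            stem = os.path.splitext(name)[0]
            target = os.path.join(new_dir, stem)
            os.makedirs(target, exist_ok=True)
            for i, src in enumerate(paths, start=1):
                shutil.copy2(src, os.path.join(target, f"{stem}_{i}.wav"))

    copy_wavs("old_dir", "new_dir")

What I am not sure about is whether this grouping and renaming is the cleanest way to do it.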

Does moving files from VeraCrypt store logs on Windows?

If I move a file from a non-hidden encrypted drive to my main C: drive, then move the original file to a hidden container on the encrypted drive, and then wipe the original file with CCEnhancer/secure erase, is there any way to tell that the file came from the encrypted drive? Additionally, does software like CCEnhancer/secure erase remove “recently viewed” logs from applications?

Why does Safari not recreate .localstorage files when they are deleted manually from LocalStorage folder?

In my experience, when I delete the .localstorage files associated with a website from ~/Library/Safari/LocalStorage, they are not recreated when the page is reloaded. However, navigating to Preferences… > Privacy > Manage Website Data… and removing a site’s Local Storage from there (together with Cache, Cookies and Databases) deletes those files as well – while somehow allowing them to be recreated when the page reloads. What is the difference?

As an example, these are the files I would delete from the LocalStorage folder for this website:

https_apple.stackexchange.com_0.localstorage  
https_apple.stackexchange.com_0.localstorage-wal  
https_apple.stackexchange.com_0.localstorage-shm

beginner – Rails 5 app to seed DB, generate CSVs, and export them individually or in complex zip files

I have an app whose sole purpose is to seed data files and add the data to different CSVs, which are zipped and exported by the user. My application controller is filled with actions that all look like this:

  def export_tips
    @appointments = Appointment.order('service_id')
    send_data @appointments.to_csv_tips, filename: 'tips.csv'
  end

  def export_ticketpayments
    @appointments = Appointment.order('service_id')
    send_data @appointments.to_csv_ticketpayments, filename: 'ticketspaymentitems.csv'
  end

  def export_batchmanifest
    @batchmanifests = Batchmanifest.all
    send_data @batchmanifests.to_csv_batchmanifest, filename: "batch_manifest-#{Date.today}.csv"
  end

  def export_pets
    @clients = Client.all
    send_data @clients.to_csv_pets, filename: 'pets.csv'
  end

  def export_clients
    @clients = Client.all
    send_data @clients.to_csv_clients, filename: 'clients.csv'
  end

I have this in the application controller because I use it in multiple different areas, including creating single CSV exports and creating complex zip files with multiple zips and CSVs inside.

Some things that I have tried to clean up the code include:

  • Different variations of this:
    def csv_export (model, filename)
    @model.pluralize = (model.titleize).all
    send_data @model.pluralize.filename, filename: filename
    end
  • Having each one in its own controller (could not access them from different views and other controllers easily)
  • I also tried to figure out how to create my own module, but was unable to do so.

My application record is just as bad, with repeated methods that exist simply to export the CSVs:

    def self.to_csv_appointments
      attributes = %w(appointment_id location_id employee_id client_id child_id notes
                      has_specific_employee start_time end_time)
      CSV.generate(headers: true) do |csv|
        csv << attributes
        all.each do |appointment|
          csv << attributes.map { |attr| appointment.send(attr) }
        end
      end
    end

    def self.to_csv_appointmentservices
      attributes = %w(appointment_id service_id price duration)
      CSV.generate(headers: true) do |csv|
        csv << attributes
        all.each do |appointment|
          csv << attributes.map { |attr| appointment.send(attr) }
        end
      end
    end

    def self.to_csv_tickets
      attributes = %w(ticket_id location_id client_id ticket_status employee_id
                      employee_id start_time)
      headers = %w(ticket_id location_id client_id status employee_id
                   closed_by_employee_id closed_at)
      CSV.generate(headers: true) do |csv|
        csv << headers
        all.each do |appointment|
          csv << attributes.map { |attr| appointment.send(attr) }
        end
      end
    end

For the application record, I have tried approaches similar to those listed for the application controller, but to no avail. Again, I put the code in the application record instead of in the individual model files because I need to access these methods in multiple parts of the site.

The code from the application controller is used mostly in the static controller and by buttons in the view files. I need the ability to create the file sets, as listed below, but also to allow the user to export just one CSV.

Examples from the static controller that build the zip files:

def create_appointments_zip
  file_stream = Zip::OutputStream.write_buffer do |zip|
    @appointments = Appointment.order('service_id')
    zip.put_next_entry "appointment_manifest.csv"; zip << File.binread("#{Rails.root}/app/assets/csvs/appointment_manifest.csv")
    zip.put_next_entry "appointments.csv"; zip << @appointments.to_csv_appointments
    zip.put_next_entry "appointment_services.csv"; zip << @appointments.to_csv_appointmentservices
    zip.put_next_entry "appointment_statuses.csv"; zip << @appointments.to_csv_appointmentstatuses
  end
  file_stream.rewind
  File.open("#{Rails.root}/app/assets/csvs/appointments.zip", 'wb') do |file|
    file.write(file_stream.read)
  end
end

 def export_salonset
    create_appointments_zip
    create_tickets_zip
    create_inventory_zip
    create_memberships_zip
    file_stream = Zip::OutputStream.write_buffer do |zip|
      @saloncategories = Saloncategory.all
      @salonservices = Salonservice.all
      @clients = Client.all
      @locations = Location.all
      @salonpricings = Salonpricing.all
      @staffs = Staff.order("location_id")
      zip.put_next_entry "batch_manifest.csv"; zip << File.binread("#{Rails.root}/app/assets/csvs/batch_manifest_simple_salon.csv")
      zip.put_next_entry "categories.csv"; zip << @saloncategories.to_csv_saloncategories
      zip.put_next_entry "clients.csv"; zip << @clients.to_csv_clients
      zip.put_next_entry "employees.csv"; zip << @staffs.to_csv_staff
      zip.put_next_entry "locations.csv"; zip << @locations.to_csv_locations
      zip.put_next_entry "pricings.csv"; zip << @salonpricings.to_csv_pricings
      zip.put_next_entry "services.csv"; zip << @salonservices.to_csv_salonservices
      zip.put_next_entry "appointments.zip"; zip << File.binread("#{Rails.root}/app/assets/csvs/appointments.zip")
      zip.put_next_entry "inventories.zip"; zip << File.binread("#{Rails.root}/app/assets/csvs/inventories.zip")
      zip.put_next_entry "tickets.zip"; zip << File.binread("#{Rails.root}/app/assets/csvs/tickets.zip")
      zip.put_next_entry "addonmappings.csv"; zip << File.binread("#{Rails.root}/app/assets/csvs/addonmappings.csv")
    end
    file_stream.rewind
    respond_to do |format|
      format.zip do
        send_data file_stream.read, filename: "salon_set.zip"
      end
    end
    file_stream.rewind
    File.open("#{Rails.root}/app/assets/csvs/salon_set.zip", 'wb') do |file|
      file.write(file_stream.read)
    end
  end

Links to my repository, if that is helpful:
https://github.com/atayl16/data-wizard/blob/master/app/controllers/application_controller.rb
https://github.com/atayl16/data-wizard/blob/master/app/models/application_record.rb

I know there must be a better way than writing these same lines over and over. The code works, my site works (amazingly), but I would be embarrassed for any seasoned developer to see the repository without laughing. Any help is appreciated!

email – Importing 9000 .eml files into Thunderbird 78

I have a folder containing 9069 .eml files sent by Windows Live Mail.

I need to add them to Thunderbird. I created a sub-folder in Thunderbird’s Archived items, selected all the files in Windows Explorer, and dragged them into this folder.

Thunderbird shows an icon with the number of files and freezes for about 30 seconds.
After that nothing happens.
I tried several times, from and to different folders, but the problem persists.
Importing a small number of .eml files works.

How can I import 9000 .eml files into Thunderbird?

I am using 32-bit Thunderbird 78.1.0 on the latest 64-bit Windows 10.

The ImportExportTools add-on has a notice that it works only with Thunderbird 14.0 – 60.

I also asked this in Thunderbird support at https://support.mozilla.org/en-US/questions/1298295
but haven’t got an answer.


8 – views_pre_render: How do I get a file’s path/URL if I only have the file name?

I have a dynamically created Views listing page where I display fields from the “Collection Items” node type in this order:

  1. Item Title
  2. Item Status
  3. Date Stamp
  4. Document (uploaded file. I’m only getting the filename here)

My goal right now is to take the path of the uploaded document and use it as a link around the title, like so:

<a href="path/to/file"> Item Title </a>

I’m trying to achieve this with hook_views_pre_render(), but I am stuck just testing out some code I found here.

Here is my code:

use Drupal\Core\Routing\RouteMatchInterface;
use Drupal\collections\Entity\CollectionsEntityInterface;
use Drupal\collections\Entity\CollectionsEntityType;

function collections_views_pre_render(&$view) {
  $results = $view->result;
  $view_id = $view->storage->id();
  $current_display = $view->current_display;

  if (strpos($view_id, "collection__") > -1 && $current_display == "default_page") {
    foreach ($results as $key => $result) {

      // prints 0123456
      echo $key;

      // breaks the site. Gets message “The website encountered an unexpected error. Please try again later.”
      $parent_id_value = $result->_entity->parent_id->getValue()[0]['value'];
      $parent_type_value = $result->_entity->parent_type->getValue()[0]['value'];
      $media_field = $result->_entity->field_media_field->getValue()[0]['value'];

    }
  }
}

Once I get the file’s path, I would then try the solution offered here:
https://www.drupal.org/forum/support/module-development-and-code-questions/2020-01-12/use-hook_views_pre_render-to-change

$link = Drupal\Core\Link::createFromRoute($parent_id_value, 'entity.node.canonical', ['node' => $parent_id_value]);

files – How to move a Drupal 8 site to a new server?

I have a Drupal 8 site on an Ubuntu 18.04 server and want to move my site to a new server with Ubuntu 20.04.

With Drupal 7 I did these steps

Create a set of keys (on the new server):

$ sudo ssh-keygen -t rsa -b 4096 -C root@www-example-com
$ sudo cat /root/.ssh/id_rsa.pub

Add the public key (on the old server):

$ sudo nano /root/.ssh/authorized_keys

Export the database (on the old server):

mysqldump -u root -p www_example_com > /var/www/www-example-com/share/www-example-com_$(date +%F).sql

Repatriate the site files (on the new server):

$ sudo scp -r -p root@xxx.xx.xxx.xx:/var/www/www-example-com/ /var/www/www-example-com/

Import the database (on the new server):

mysql -u root -p www_example_com < /var/www/www-example-com/share/www-example-com_2020-08-10.sql

That’s all; with these instructions, the Drupal 7 site works.

The problem

Now, with Drupal 8, there is Composer, with a new folder and file tree present in the var directories and in the home directory.

How do I move a Drupal 8 site to a new server?

postgresql – Back up files from a FreeIPA client to a domain server using a cron script

I am running RHEL 8 machines on an IPA domain (application servers and one storage server). The application servers run Postgres Docker containers, with the database data stored on the host application server. I am trying to create a system for automated backup of the database data on these application servers to a single remote storage server.

I can write a cron script to dump/copy the database files to a specific mount point (e.g., remote storage server), except that cron scripts run as root, which is not a user on the IPA domain.
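
For concreteness, the kind of job I have in mind looks roughly like this (a hypothetical sketch; the container name, database name, and paths are placeholders rather than my real setup):

    #!/usr/bin/env python3
    # Hypothetical nightly backup job: dump the Postgres database running in a
    # local Docker container, then copy the dump onto the storage server's mount.
    import shutil
    import subprocess
    from datetime import date
    from pathlib import Path

    DUMP_DIR = Path("/var/backups/pg")           # local staging directory (placeholder)
    REMOTE_MOUNT = Path("/mnt/storage/backups")  # mount of the storage server (placeholder)

    def run_backup():
        DUMP_DIR.mkdir(parents=True, exist_ok=True)
        dump_file = DUMP_DIR / f"appdb_{date.today()}.sql"
        # Run pg_dump inside the container and capture its output on the host.
        with dump_file.open("wb") as out:
            subprocess.run(
                ["docker", "exec", "pg_container", "pg_dump", "-U", "postgres", "appdb"],
                stdout=out, check=True,
            )
        # Copy the dump onto the remote mount point; this final step is where the
        # credentials/permissions question about IPA and root comes in.
        shutil.copy2(dump_file, REMOTE_MOUNT / dump_file.name)

    if __name__ == "__main__":
        run_backup()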

Does IPA have some way of facilitating this?

What are some of the more secure methods of implementing this?

I thought about:

(1) creating a dedicated backup user on the IPA domain with restricted access, but I am not sure how to bridge the gap with cron;

(2) providing access on a per-machine (per app server) basis, for any user, to a particular folder on the storage server; I am not sure whether that is a secure approach or what risks it carries.

dresden files – How does being pulled into a demesne mechanically work?

Demesnes are described as follows:

As a spirit that has been linked to the mortal world, you naturally create a space within the Nevernever tied to that place or concept. The space reflects the landscape of your “mind”.

Ghosts possess a major advantage in their demesne compared to their mortal haunt. But the question is: how do they get there, or get someone they are targeting in there, to make use of that?

As the description shows, the demesne is tied to the mortal location, but I am not certain how the tie functions mechanically.

Does it require a specific power such as Swift Transition – which seems to be an automatic success, if only once per scene? Although it also seems like that would only apply to the ghost itself moving, not to bringing anyone else along.

Or is it a matter of the ghost simply having to roll a skill (Discipline?) – and a PC having to roll (Conviction?) to resist the transition and stay out of the Nevernever? Is it treated like any other combat roll?

S Gallery vault app messed up my files with AES encryption

I downloaded a calculator app by FishNet on Google Play. It is an app where you can store media files, disguised as a calculator, and the files can only be accessed by typing a passcode into the app. It hides the files using AES encryption. Here’s an example of a picture (original form) once it’s uploaded onto the app (encrypted form).

I put 938 files in the app: 732 images, 2 GIFs, and 204 videos. Altogether they were 2 GB in total. After some time I released all of them back into the normal gallery, then put them back in the app. After a while my files were missing, and I asked the developers for help. They showed me a section in the app offering manual retrieval or automatic retrieval. I tried automatic, but it said no encrypted files were found.

I tried manual retrieval, and it put all my lost files into a folder called LostFileRestore, but they all had strange names starting with e-randomnumbersandletters.fileformat (this is the exact same name a file has in encrypted form, except that the encrypted form has no .fileformat at the end; the extensions were mostly .mp4 or .jpg). They seem to have GUID names. When I try opening a video file it fails, and it is the same with the image files. I provided hyperlinks to screenshots I took showing what they look like when I try to open them.

I told them this, and they told me to make all the image files JPG (some were originally PNG while some were JPEG, but the majority were already JPG) and make all the video files MP4 (some were originally WebM), and then try manual retrieval again, and I complied. But the situation didn’t change.

They then told me to try automatic retrieval, but when I did, the app said no encrypted files were found. I told them this, and they haven’t responded to me in over a month. Is it over? Have I lost everything? I really need some of those files before September.

When encrypted, the media files were kept in a hidden folder, all had what looked like GUID names, and were in an unidentified file format, most likely a binary format.

Now they have all kept their encrypted names but are in .jpg and .mp4 formats and are all unviewable, yet the file sizes are still the same. Is there a way to fully decrypt them and recover them?
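
One check I could run myself is to look at the first bytes of each recovered file, to see whether it contains real media data or still looks like ciphertext (a rough Python sketch; only the LostFileRestore folder name comes from the situation above):

    # Real JPEG/PNG/MP4 files start with well-known signatures;
    # AES output looks like random bytes.
    from pathlib import Path

    def sniff(path):
        head = Path(path).read_bytes()[:16]
        if head.startswith(b"\xff\xd8\xff"):
            return "looks like a real JPEG"
        if head.startswith(b"\x89PNG"):
            return "looks like a real PNG"
        if head[4:8] == b"ftyp":  # MP4/ISO media files carry 'ftyp' at byte offset 4
            return "looks like a real MP4"
        return "no known media signature (probably still ciphertext)"

    for f in sorted(Path("LostFileRestore").iterdir()):
        if f.is_file():
            print(f.name, "->", sniff(f))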