python – parse two csv files, calculate a product and sort the result by product

Given two csv files, science_courses.csv and other_courses.csv, each containing student names and their respective course points, calculate each student's CGPA and sort by it (highest to lowest CGPA).

sample file content:

  1. Open both files and build a dictionary of marks, e.g. {studentA: (mark-1, mark-2, ..., mark-n), studentB: (mark-1, ..., mark-n)}
  2. Iterate through the dictionary of lists, calculate each CGPA, and store it in a dictionary, e.g. {studentA: cgpa, studentB: cgpa}
  3. Sort the results by CGPA (highest to lowest) and print


def get_all_courses(a, b):
    all_courses = {}
    with open(a, 'r') as sci, open(b, 'r') as othr:
        for i in sci:
            sc = i.strip('\n').split(',')
            all_courses[sc[0]] = sc[1:]

        for j in othr:
            ot = j.strip('\n').split(',')
            name = ot[0]
            if name in all_courses:
                all_courses[name] = all_courses.get(name) + ot[1:]
    return all_courses

def get_results():
    avg_marks = {}
    result = get_all_courses('science_courses.csv', 'other_courses.csv')
    for k, v in result.items():
        total = sum(map(float, v))
        avg = round(total / len(v), 2)
        avg_marks[k] = avg
    for i in sorted(avg_marks.items(), key=lambda x: x[1], reverse=True):
        print(i)


Please suggest better alternatives to:

  1. simplify the bookkeeping of the marks extracted from the files
  2. simplify/optimise the calculation and sorting based on CGPA.
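One possible rework covering both points (a sketch, not the only idiomatic answer): `collections.defaultdict` handles the bookkeeping, and a dict comprehension plus `sorted` handles the CGPA step. It also fixes the quiet bug in the original where a student who appears only in the second file is silently dropped.

```python
import csv
from collections import defaultdict

def get_all_marks(*paths):
    """Collect every student's marks from any number of CSV files."""
    marks = defaultdict(list)  # an unseen student just gets a fresh empty list
    for path in paths:
        with open(path, newline='') as f:
            for row in csv.reader(f):
                name, *scores = row
                marks[name].extend(map(float, scores))
    return marks

def cgpas_sorted(*paths):
    """Average each student's marks and sort highest to lowest."""
    marks = get_all_marks(*paths)
    averages = {name: round(sum(v) / len(v), 2) for name, v in marks.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
```

Using the `csv` module instead of manual `strip`/`split` also handles quoted fields for free.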

linux – Speedtest Ookla No Timestamp in CSV output

So when running the official Speedtest client from Ookla and outputting to a CSV file, I’ve noticed the output has no timestamp field. The JSON does, but I’m not particularly savvy with jq, and my attempts to convert the JSON output to CSV haven’t been useful.

Is there a way to take the output and pipe it to a file with a timestamp in the front?

This is the output given as a JSON

{"type":"result","timestamp":"2021-07-22T16:14:17Z","ping":{"jitter":0.035999999999999997,"latency":3.9399999999999999},"download":{"bandwidth":117078051,"bytes":884657048,"elapsed":7601},"upload":{"bandwidth":117029963,"bytes":467614102,"elapsed":4006},"packetLoss":0,"isp":"CenturyLink","interface":{"internalIp":"","name":"eth0","macAddr":"E4:5F:01:2F:1D:39","isVpn":false,"externalIp":""},"server":{"id":10161,"name":"CenturyLink","location":"Orlando, FL","country":"United States","host":"","port":8080,"ip":""},"result":{"id":"64657421-d008-4053-9832-2d1a9b01b649","url":""}}

and this is the output of the CSV (with headers)

"server name","server id","latency","jitter","packet loss","download","upload","download bytes","upload bytes","share url"
"The Villages - The Villages, FL","25753","33.338","0.302","0","117318528","112406990","1488776432","1053747984",""
"CenturyLink - Orlando, FL","10161","4.013","0.399","0","76816660","112435444","1158108878","473391675",""
"CenturyLink - Orlando, FL","10161","3.533","0.407","0","115293486","97552291","1002647576","574510787",""

All I’m trying to do is get the timestamp that the JSON output includes into the CSV format so I can process it further.
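One workaround (a sketch, not an official Speedtest feature) is to keep the JSON output and rebuild the CSV columns yourself, timestamp first. The column mapping below is inferred from the two samples above, so double-check it against your own output:

```python
import csv
import io
import json

def result_to_csv_line(json_line):
    """Turn one speedtest JSON result line into a quoted CSV line, timestamp first."""
    r = json.loads(json_line)
    row = [
        r["timestamp"],  # the field the built-in CSV output drops
        "{} - {}".format(r["server"]["name"], r["server"]["location"]),
        r["server"]["id"],
        r["ping"]["latency"],
        r["ping"]["jitter"],
        r.get("packetLoss", 0),
        r["download"]["bandwidth"],
        r["upload"]["bandwidth"],
        r["download"]["bytes"],
        r["upload"]["bytes"],
        r["result"]["url"],
    ]
    buf = io.StringIO()
    csv.writer(buf, quoting=csv.QUOTE_ALL).writerow(row)  # quote everything, like the CLI does
    return buf.getvalue().strip()
```

Each JSON result piped through this function yields one CSV row to append to the log file.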

Simple JSON to CSV in MS Power Automate problem

Good Afternoon,

I am trying to convert the following JSON from the URL: to CSV via MS Power Automate.

I use HTTP to get the above URL

And this is then where it goes wrong. I Parse the JSON using the following

"type": "object",
"properties": {
    "data": {
        "type": "array",
        "items": {
            "type": "array"
    "meta": {
        "type": "array",
        "items": {
            "type": "string"


which I then pass to a Create CSV table action, which needs an array, not an object.

Any help most appreciated
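In Power Automate itself, the usual fix is to pass only the `data` array (not the whole parsed object) into the Create CSV table action. Outside the flow, the conversion being asked for amounts to the following (a sketch, assuming `meta` carries the column names and `data` the rows, as the schema above suggests):

```python
import csv
import io
import json

def data_meta_to_csv(payload):
    """Rebuild CSV text from a {"data": [[...]], "meta": [...]} response."""
    obj = json.loads(payload)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(obj["meta"])   # header row from the meta array
    writer.writerows(obj["data"])  # one CSV row per inner array
    return buf.getvalue()
```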


magento2 – How to get columns header of csv in custom import module

Hi, I created a custom module to import data, based on the module (

In this case I need to get the CSV header names.
I tried several ways but it’s not working.

I tried the getSource method, but the system says "source is not set".

If anyone knows the answer, please help me solve the issue.


addtocart – How to keep ‘In stock’ even though the qty is 0 after a csv file import

We are currently running a cron job to import sku, qty, and price every 30 min.

If the qty for a product becomes 0, the stock status automatically changes from ‘In stock’ to ‘Out of stock’ and the ‘Add to Cart’ button disappears from the product page, making it impossible for customers to add the part to the cart.

I also allowed backorders in the configuration, but it made no difference.


I’d like to display the Add to cart button even though the qty is 0.

Can you please share your wisdom?

The Magento version is 2.3.4.

Read a csv file and send it to a portal via selenium python

Good afternoon!
I am developing a script that takes data from a csv file and enters it into a portal; the code is below.
However, I am not managing to get the file's data: sinnum, sinano, nome, cpf, datavencimento, valorboleto
into the fields sinistro, anoSinistro, nome = pyautogui.write("xxxxxx xxxxxxx"), cpf = pyautogui.write("xxxxxx xxxxxxxx"), datavencimento = pyautogui.write("xxxxxxxxxxxxx") and
valor = pyautogui.write("xxxxxx")
How do I do that? And how do I write a loop that goes through the whole csv file?

import sys
import psutil
import logging
import logging.handlers
import os
import time
import csv
import uuid
import pyautogui
import openpyxl

from datetime import datetime
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select, WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import (
    NoSuchElementException,
    ElementNotInteractableException,
    TimeoutException,
    ElementClickInterceptedException,
    InvalidSessionIdException,
)
MIN_PYTHON = (3, 7)

if sys.version_info < MIN_PYTHON:
    sys.exit("Python %s.%s or later is required.\n" % MIN_PYTHON)

class Sinistro:
    def __init__(self, sinnum=None, sinano=None, nome=None, cpf=None,
                 datavencimento=None, valorboleto=None):
        self.__sinnum = sinnum
        self.__sinano = sinano
        self.__nome = nome
        self.__cpf = cpf
        self.__datavencimento = datavencimento
        self.__valorboleto = valorboleto

    # each getter/setter pair needs @property decorators; without them the
    # second definition silently replaces the first
    @property
    def sinnum(self):
        return self.__sinnum

    @sinnum.setter
    def sinnum(self, value):
        self.__sinnum = value

    @property
    def sinano(self):
        return self.__sinano

    @sinano.setter
    def sinano(self, value):
        self.__sinano = value

    @property
    def nome(self):
        return self.__nome

    @nome.setter
    def nome(self, value):
        self.__nome = value

    @property
    def cpf(self):
        return self.__cpf

    @cpf.setter
    def cpf(self, value):
        self.__cpf = value

    @property
    def datavencimento(self):
        return self.__datavencimento

    @datavencimento.setter
    def datavencimento(self, value):
        self.__datavencimento = value

    @property
    def valorboleto(self):
        return self.__valorboleto

    @valorboleto.setter
    def valorboleto(self, value):
        self.__valorboleto = value

class abresite:
    def __init__(self):
        options = webdriver.IeOptions()
        options.file_upload_dialog_timeout = 2000
        options.ignore_protected_mode_settings = True
        self.driver = webdriver.Ie(
            executable_path="C:/Users/Documents/WEBDRIVE/IE/IEDriverServer.exe",
            options=options)
        self.username = input('digite usuario')
        self.senha = input('digite sua senha')

    # def iniciasite(self):
    #     self.flogin()

    def iniciasite(self):
        action = ActionChains(self.driver)
        usuario = self.driver.find_element_by_id('username')
        senha = self.driver.find_element_by_id('password')
        gerenciador = self.driver.find_element(By.LINK_TEXT, "Gerenciador de Sinistros")

        # handles open before the popup; must be captured before the new window appears
        janelainicial = [self.driver.current_window_handle]
        janelas = self.driver.window_handles

        for janela in janelas:
            if janela not in janelainicial:
                janala_atual_1 = self.driver.window_handles[1]
                # print(janala_atual_1)

                self.driver.find_element(By.ID, "ramoSinistro").click()
                self.driver.find_element(By.ID, "ramoSinistro").send_keys("123")
                self.driver.find_element(By.ID, "sinistro").click()
                self.driver.find_element(By.ID, "sinistro").send_keys("1234")

                self.driver.find_element(By.ID, "anoSinistro").click()
                self.driver.find_element(By.ID, "anoSinistro").send_keys("2001")
                self.driver.find_element(By.ID, "anoSinistro").send_keys(Keys.ENTER)

                self.driver.find_element(By.CSS_SELECTOR, ".centro:nth-child(2)").click()

                pyautogui.moveTo(538, 102)
                pyautogui.moveTo(538, 450)
                pyautogui.moveTo(100, 200)
                pyautogui.moveTo(88, 330)
                pyautogui.moveTo(88, 360)
                pyautogui.moveTo(448, 399)

                pyautogui.moveTo(220, 360)
                pyautogui.write("xxxxxxx xxxxxxxxx")
                pyautogui.moveTo(220, 390)
                pyautogui.moveTo(260, 423)

                pyautogui.moveTo(310, 448)
                pyautogui.hotkey('ctrl', 'a')

                pyautogui.moveTo(88, 510)

                window_before = self.driver.window_handles[2]
                # janala_atual_1 = self.driver.window_handles[1]
                # self.driver.refresh()

site = abresite()
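The CSV loop the question asks about could be sketched like this (a minimal sketch, assuming the file has a header row with exactly the column names sinnum, sinano, nome, cpf, datavencimento, valorboleto):

```python
import csv

CAMPOS = ("sinnum", "sinano", "nome", "cpf", "datavencimento", "valorboleto")

def carregar_sinistros(caminho):
    """Read every data row of the CSV into dicts keyed by the header names."""
    with open(caminho, newline='', encoding='utf-8') as f:
        return [{campo: linha[campo] for campo in CAMPOS}
                for linha in csv.DictReader(f)]
```

Each returned dict can then drive one portal submission, e.g. `pyautogui.write(linha["nome"])` in place of the hard-coded "xxxxxx" strings, inside a `for linha in carregar_sinistros(arquivo):` loop.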

python – Accept only CSV input else throw error in pandas

I’m trying to write code in pandas where the input file should be accepted only if it is in CSV format; otherwise it should throw an error.

import pandas as pd
import tkinter as tk
from tkinter import filedialog

print(" Please select the working directory ")

print("Select the csv file : ")

migration – Drupal migrate from CSV – add created node to menu parent

I have a migration creating nodes from a CSV file. This is not an export from another drupal site.

All imports as expected until I try and add the migrated nodes to a specified menu item.

In the following migration yml, ‘publications_create_nodes’ has successfully run and I have a collection of new nodes. I can create the menu items for each node with the following.

    plugin: default_value
    default_value: 'main'
    plugin: migration_lookup
      - publications_create_nodes
        - id
    plugin: concat
      - constants/path
      - '@nid'
  title: title
    plugin: menu_link_parent
      - 462
      - 'main'
      - '/publications'
    plugin: default_value
    default_value: 0
    plugin: default_value
    default_value: 0
    plugin: default_value
    default_value: 1
    plugin: default_value
    default_value: 'en'

But the problem is all these nodes are added to the top level of the main menu.

    plugin: menu_link_parent
      - 462
      - 'main'
      - '/publications'

states I require

  • plid
  • menu_name
  • parent_link_path

I got the plid by looking in the DB:

SELECT UUID FROM menu_link_content WHERE id IN (SELECT id FROM menu_link_content_data WHERE title = "publications");

then used the result from that to get:

SELECT mlid FROM menu_tree WHERE id =('menu_link_content:e7e5dcf2-110a-41ca-928d-5ff353a311fd');

BUT the plid expected in the migration comes from the source, not the destination, and since I do not have a source menu item, I am not sure how to attach the link to a specific menu parent.

Ideally I would like to have the destination mlid as part of the CSV data so I can assign each generated node to an existing parent menu item. Otherwise I’m going to have about 600 nodes at the top level of the main menu and I’ll need to figure out where they belong.

java – How Should I Process A CSV To REST?

The client wants to put a CSV file on an FTP server, have it processed then have an error file put back in a different directory.

We are only a small company so we can only afford to support Java as the language.
We also don’t have an FTP server so we are considering using Azure blob storage with a blob trigger and Function to act like it.
The actual processing of the data is done in a service behind a REST API.

During prototyping we used Apache Camel to watch a directory, split the file into lines, convert it into JSON and upload each line (in JSON) to the REST API.

The Azure blob trigger will now manage the monitoring part so we can skip the Camel directory watcher.

Now that Azure functions are in the mix there are several options for this process.

  1. Put the whole Camel application in the Function to process the whole file in the existing way
  2. Have 2 Functions where the first does the splitting and the second uploads each line
  3. Have the Function upload the whole file to a new API endpoint and do all the splitting and process in the REST API

Which of these would be the most appropriate scenario for this use case and what would be the pattern for storing the error file back on the FTP server/blob storage?

NB: This would be staying in production for about 5-10 years (although it would probably be updated/changed during that time).
It doesn’t need to be any more complex than the description as the processing of the data is not involved at this stage of the application.

It also doesn’t need to be extendable, as the features are well defined in advance: it is replacing a very old system whose functionality hasn’t really changed in a decade.

The max is likely to be 100k rows a month so it doesn’t need to scale out either.

QuestDB mass CSV import – java process Out Of Memory

I’m doing a mass import of CSV files (200M+ records) into QuestDB running in Docker, via a bash script that loops over a list of files. Over time I can see the memory usage of the java process gradually increasing to the point of OOM. Even after terminating the import script early, the memory usage of the java process stays at the same level until I relaunch the container.

Bash import script:

for table in "${tickdb_tables(@)}"; do
  symbol=$(echo $table| cut -d'_' -f 1)

  curl  -i -F 
  schema='({"name":"ts", "type": "TIMESTAMP", "pattern": "yyyy-MM-dd HH:mm:ss"},{"name":"symbol", "type": "SYMBOL"},{"name":"open","type":"FLOAT"},{"name":"high","type":"FLOAT"},{"name":"low","type":"FLOAT"},{"name":"close","type":"FLOAT"},{"name":"volume","type":"INT"},{"name":"timeframe","type":"SYMBOL"})' 
   -F data=@$symbol.csv "http://localhost:9000/imp?name=CANDLES&timestamp=ts"

  rm $symbol.csv

  sleep 5

Table create statement:

create table CANDLES (ts TIMESTAMP, symbol SYMBOL, open FLOAT, high FLOAT, low FLOAT,
                      close FLOAT, volume INT, timeframe SYMBOL) 
timestamp(ts) partition by MONTH;

Is there anything I’m missing here, or is this a potential bug/memory leak in QuestDB? (I didn’t want to open an issue until I’m sure I’m not doing something wrong.)
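One way to narrow the problem down is to bound the size of each upload by splitting every CSV into fixed-size chunks before POSTing them to /imp (a troubleshooting sketch, not a QuestDB recommendation; the chunk size is arbitrary):

```python
import itertools

def iter_chunks(path, rows_per_chunk=1_000_000):
    """Yield the header plus up to rows_per_chunk data rows per chunk,
    so each HTTP upload stays a bounded size."""
    with open(path) as f:
        header = next(f)  # repeat the header on every chunk
        while True:
            rows = list(itertools.islice(f, rows_per_chunk))
            if not rows:
                break
            yield header + ''.join(rows)
```

Each yielded chunk can be sent as the `data` part of the same /imp request as in the script above; if memory still climbs with small uploads, that points more strongly at a server-side leak.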