javascript – Parse proxy strings to array of objects

The goal of this module is to take a string (user input, e.g. someone pasting a large list of proxies) and parse it into an array of objects.

I appreciate any and all feedback, but I’m specifically interested in improving the stringToProxy function.

Namely, I'm unsure about mutating the string parameter; it seems like there could be a better solution. The way I'm declaring the proxy at the beginning of the function also feels wrong to me (using `as`), but I couldn't think of anything better. I also feel as though I may be overusing string.split().

Thanks for your time!

interface Proxy {
    host: string;
    port: number;
    auth?: {
        username: string;
        password: string;
    };
    protocol?: string;
}
function parse(string: string): Proxy[] {
    const rawArray = stringToArray(string);
    const proxyArray = rawArray.map((string) => stringToProxy(string));
    return proxyArray;
}

function stringToArray(string: string): string[] {
    return string.trim().replace(/( |,|\n)+/g, ",").split(",");
}

function stringToProxy(string: string) {
    const proxy = {} as Proxy;

    if (string.includes("://")) {
        const [protocol, remainder] = string.split("://");
        proxy.protocol = protocol;
        string = remainder;
    }
    if (string.includes("@")) {
        const [auth, remainder] = string.split("@");
        const [username, password] = auth.split(":");
        proxy.auth = { username, password };
        string = remainder;
    }
    const [host, port] = string.split(":");
    proxy.port = parseInt(port);
    proxy.host = host;

    return proxy;
}

export = parse;

const parse = require("parse-proxy");


[
  { host: '', port: 80 },
  { host: '', port: 80 },
  { host: '', port: 80 }
]

[
  {
    protocol: 'https',
    auth: { username: 'user', password: 'pass' },
    host: '',
    port: 8080
  },
  {
    protocol: 'https',
    auth: { username: 'user', password: 'pass' },
    host: '',
    port: 3128
  }
]

algorithms – Find optimal alignment between 2 strings with gap cost function

I have a homework question that I have been trying to solve for many hours without success; maybe someone can guide me toward the right way of thinking about it.

The problem:

Given two strings S1 and S2, find the score of their optimal global alignment with gaps.
The gap costs are given by a general function 𝑤(𝑘). It is known that for gap lengths 𝑘 ≥ 𝑑, 𝑤(𝑘) equals a constant C.

Suggest an algorithm solving the problem with space O(min{|S1|,|S2|}*d) and time O(|S1|*|S2|*d).

Instruction: When choosing the optimal gap length at each matrix entry, process separately the gaps
of a length less than d and the longer gaps. Store in each matrix entry the optimal value obtained by
using the longer gaps, in addition to the regular optimal value.

Now we learned the following 2 algorithms:

Alignment with gaps where we do not know anything about the cost function:
[image: recurrence for alignment with a general gap cost function]

Alignment with affine gap cost function
[image: recurrence for alignment with an affine gap cost function]

my solution:

I know that I have to use a table with d rows in order to meet the space requirement, and that I have to combine both methods, but I'm having trouble combining them into one recursive formula. Here is what I have done so far:
[image: my attempt at a combined recurrence]

But I'm not sure how to include the cost of extending an existing gap that is already longer than d, and I'm not even sure how to check whether my recursive formula is correct.
Any help would be appreciated!
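The instruction in the exercise essentially dictates the shape of the recurrence. Here is a sketch of one way to combine the two methods (my own attempt at the hint, not a verified solution; $V(i,j)$ is the regular optimum for the prefixes $x_1..x_i$ and $y_1..y_j$, $s$ is the substitution score, and $G_{\text{row}}, G_{\text{col}}$ are the extra per-entry values from the hint, storing the optimum whose last step is a gap of length $\ge d$):

```latex
V(i,j) = \max \begin{cases}
V(i-1,\,j-1) + s(x_i, y_j) \\
\max_{1 \le k < d} \bigl( V(i,\,j-k) - w(k) \bigr) \\
\max_{1 \le k < d} \bigl( V(i-k,\,j) - w(k) \bigr) \\
G_{\text{row}}(i,j), \qquad G_{\text{col}}(i,j)
\end{cases}

G_{\text{row}}(i,j) = \max \bigl( G_{\text{row}}(i,\,j-1),\; V(i,\,j-d) - C \bigr)

G_{\text{col}}(i,j) = \max \bigl( G_{\text{col}}(i-1,\,j),\; V(i-d,\,j) - C \bigr)
```

Since $w(k) = C$ for every $k \ge d$, extending a long gap by one position costs nothing extra, which is what allows the $O(1)$ update of $G$; each entry then does $O(d)$ work for the short gaps, and only the last $d+1$ rows of $V$ (indexed over the shorter string) need to be kept, matching the required time and space bounds.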

c – Unable to copy an array of strings into an array of strings inside a struct

I've been banging my head against this for hours and can't solve it.
I want to copy a new string into the first position of an array of strings inside a struct. However, when I try to copy the new value that should go into the first position, I simply get an error ("error: assignment to expression with array type"). I'm just starting out in programming, so please forgive any beginner mistakes.

Here is the code:

#include <stdio.h>
#include <stdlib.h>

typedef struct{
    char nome[30];
    int rg;
} PESSOA;

PESSOA *alocaMemoria(int tam){
    PESSOA *v;

    v = (PESSOA *) malloc(tam * sizeof(PESSOA));

    return v;
}

void adicionaValor(PESSOA *vetor, int tam){
    int i;

    for(i = 0; i < tam; i++, vetor++){
        printf("Nome: ");
        scanf("%s", vetor->nome);
        printf("RG: ");
        scanf("%d", &vetor->rg);
    }
}

void mostraVetor(PESSOA *vetor, int tam){
    int i;
    PESSOA *pVetor;

    for(i = 0, pVetor = vetor; i < tam; i++, pVetor++){
        printf("Nome: %s\n", pVetor->nome);
        printf("RG: %d\n", pVetor->rg);
    }
}

void adicionaInicio(PESSOA *vetor, int tam, char *nome, int rg){
    PESSOA *novoVetorAux;

    vetor[0].nome = nome; /* error: assignment to expression with array type */
    vetor[0].rg = rg;
}

int main(){

    PESSOA *vetor;
    int tam, op, rg;
    char nome[30];

    printf("Selecione a opcao desejada: ");
    scanf("%d", &op);

    printf("1 - Inicia vetor\n");
    printf("2 - Adiciona valores as casas do vetor\n");
    printf("3 - Adiciona valores a primeira casa do vetor\n");
    printf("6 - Mostra Vetor em ordem\n");

    switch(op){
        case 1:
            printf("Tamanho do vetor: ");
            scanf("%d", &tam);
            vetor = alocaMemoria(tam);
            break;
        case 2:
            adicionaValor(vetor, tam);
            break;
        case 3:
            printf("Nome: ");
            scanf("%s", nome);
            printf("RG: ");
            scanf("%d", &rg);
            adicionaInicio(vetor, tam, nome, rg);
            break;
        case 6:
            mostraVetor(vetor, tam);
            break;
    }

    return 0;
}

strings – Is there a shorter/better way to do this simple problem in Swift?

I am learning Swift and I came across this problem:
converting each word's first letter to its capitalized form if it is lowercase.

func upperCaseFirstCharacter(str: String) {
    let myArr = str.components(separatedBy: " ")
    var finalStr: String = ""
    for word in myArr {
        let myStr = word.replacingCharacters(in: ...word.startIndex, with: word.first?.uppercased() ?? "")
        finalStr.append(myStr)
        if myArr.last != word {
            finalStr.append(" ")
        }
    }
    print(finalStr)
}
discrete mathematics – How to sort binary strings

I have to sort binary strings (words over {0, 1}) lexicographically. If v and w are two binary strings of length k, then v is called lexicographically smaller than w if there is an index i ∈ {1,…,k} such that the first i-1 characters of v and w are identical, the i-th character of v has the value 0, and the i-th character of w has the value 1. How can I show that n binary strings of length k can be sorted lexicographically in (linear) time O(kn)?
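One standard way to get the O(kn) bound is a least-significant-position-first (LSD) radix sort: k stable two-bucket passes, each costing O(n). A sketch in Python (the function name is mine):

```python
def radix_sort_binary(strings):
    """Sort equal-length binary strings lexicographically in O(k*n):
    one stable two-bucket pass per character position, starting from
    the last (least significant) position."""
    if not strings:
        return []
    k = len(strings[0])
    for i in range(k - 1, -1, -1):
        buckets = {'0': [], '1': []}
        for s in strings:                    # stable: preserves previous order
            buckets[s[i]].append(s)
        strings = buckets['0'] + buckets['1']
    return strings
```

Stability is the key invariant to argue in the proof: after the pass for position i, the list is sorted on the suffix starting at position i, so after the final pass it is sorted on the whole string.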

google sheets – How to shrink lists of strings so that all consecutive values are represented by “A to D” instead of as “A, B, C, D”

Quite tricky to phrase this question, but I'm collecting the time ranges in which people are available, in 3-hour chunks. They can answer with any combination of "00:00 to 03:00", "03:00 to 06:00", "06:00 to 09:00", "09:00 to 12:00", "12:00 to 15:00", "15:00 to 18:00", "18:00 to 21:00", "21:00 to 00:00", and "All Day". In my current "solution" I replace these strings with the numbers 1 through 8 ("All Day" just being an overwrite), check whether all the numbers are consecutive, and if so concatenate two pre-determined strings depending on what the first and last numbers are.

If, for instance, someone replied with the following answer: "06:00 to 09:00, 12:00 to 15:00, 15:00 to 18:00, 18:00 to 21:00", how do I properly shrink that to "06:00 to 09:00, 12:00 to 21:00"? In my solution the formula just fails, since the slots are not entirely consecutive, and returns the original input string.

I think I probably need to treat every answer as an array, iterate through it to see which values are consecutive, save those to a separate array, and then replace the text back to a readable format in another cell, but I have no clue how I would go about comparing the first number to the second without an obscene number of IF statements.
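The array idea from the last paragraph can be sketched outside Sheets. Here is a minimal Python version of the merge step, assuming the slot numbering 1-8 from the question (the function name and the LABELS table are mine):

```python
LABELS = ["00:00", "03:00", "06:00", "09:00", "12:00", "15:00",
          "18:00", "21:00", "00:00"]   # slot n runs from LABELS[n-1] to LABELS[n]

def shrink(slots):
    """Collapse runs of consecutive slot numbers (1-8) into 'start to end' ranges."""
    runs = []
    for s in sorted(set(slots)):
        if runs and s == runs[-1][-1] + 1:
            runs[-1].append(s)          # extend the current consecutive run
        else:
            runs.append([s])            # start a new run
    return ", ".join(f"{LABELS[r[0] - 1]} to {LABELS[r[-1]]}" for r in runs)
```

For the example in the question, shrink([3, 5, 6, 7]) yields "06:00 to 09:00, 12:00 to 21:00". The point is that each slot only needs to be compared with the end of the current run, not with every other slot, so no cascade of IF statements is needed.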

Octave: How to turn a vector of integers into a cell vector of strings?

In Octave:

Given a vector

a = 1:3

How can I turn vector “a” into the cell vector of strings {'1','2','3'}, so that I can use strcat('x',a) to get

ans =
  (1,1) = x1
  (1,2) = x2
  (1,3) = x3

This question is a follow-up to "cell array, add suffix to every string", which could not help me, since strcat('x',{'1','2','3'}) works, but strcat('x', num2cell(1:3)) does not.

Also, when using the Matlab function string(a) according to Matlab: How to convert cell array to string array?, I get:

“The ‘string’ function is not yet implemented in Octave.”

How would an algorithm to prevent attempts at evading word (string) blocks work?

Say I create an application that allows the creation of arbitrary records (the subject doesn't matter). However, for some reason, I decide to block the use of the word batata in the record title.

However, instead of using batata, the user can type bat@t@. That way, assuming the block is implemented naively, for example as a simple equality comparison, no error will occur; that is, the block will have been bypassed. The user will have created a record (with the idea of the word I blocked, but not exactly the string I blocked).

Another option would be to use the ZERO WIDTH SPACE character (U+200B) between any characters of the word, so as to also defeat a naive comparison.

const batata = 'batata';
const fakeBatata = 'bat' + '\u200B' + 'ata';

console.log(batata, fakeBatata); // They look the same.
console.log(batata === fakeBatata); // But they are different.

Regarding the ZERO WIDTH SPACE character (U+200B), the solution I can think of is relatively simple, but not ideal: simply block any use of this kind of character (through a blacklist, for example).

I really can't think of a good way to solve this problem. I also don't know the "formal" name of the problem, nor of a hypothetical algorithm that would mitigate it.

Is it even possible to solve this efficiently? If so, how would the algorithm work?

I'm not asking for a full implementation, just some direction. That said, beyond the description above, a (simple, if this is a very complex problem) implementation in a language like JavaScript or Python would help as well. 🙂
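This is usually approached as text normalization (canonicalization): reduce every title to a canonical form before matching, rather than trying to make the comparison itself smarter. A minimal sketch in Python, where the SUBSTITUTIONS table is purely an illustrative guess and would need to grow with real abuse data:

```python
import unicodedata

# Hypothetical look-alike substitutions; a real list would be much larger.
SUBSTITUTIONS = str.maketrans({"@": "a", "4": "a", "3": "e", "1": "i", "0": "o", "$": "s"})

def canonicalize(text):
    """Reduce a string to a canonical form before blocklist matching."""
    # Drop zero-width and other invisible formatting characters (category Cf,
    # which includes U+200B ZERO WIDTH SPACE).
    text = "".join(ch for ch in text if unicodedata.category(ch) != "Cf")
    # Decompose accented letters and drop the combining marks (á -> a).
    text = unicodedata.normalize("NFKD", text)
    text = "".join(ch for ch in text if not unicodedata.combining(ch))
    # Lowercase aggressively, then map look-alike symbols back to letters.
    return text.casefold().translate(SUBSTITUTIONS)

def is_blocked(title, blocklist):
    canon = canonicalize(title)
    return any(word in canon for word in blocklist)
```

Both canonicalize('bat@t@') and canonicalize('bat\u200bata') collapse to 'batata', so a plain substring check catches them. This cannot be solved perfectly: determined users will always find new spellings, so the substitution table is a moving target, and large platforms complement it with fuzzy matching and moderation.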

Algorithm – Number of strings containing every string of a given set of strings

I don't see an easy way to do this efficiently. The closest I've come is the following (works in Python 3.9):

from typing import Iterable, Iterator, Optional

Template = list[Optional[str]]

def possible_match(template_item: Optional[str], fixed_item: str) -> bool:
    """Can the length-1 string fixed_item fit in the slot template_item?"""
    return template_item is None or template_item == fixed_item

def fill_templates(base_template: Template, fixed: str) -> Iterator[Template]:
    """Fill a string in all the slots that it fits. For example, with
    a template of **ab and a fixed string ca, we would get:
    caab and *cab
    """
    for i in range(len(base_template) - len(fixed) + 1):
        if all(possible_match(template_item, fixed_item) for template_item, fixed_item in zip(base_template[i:], fixed)):
            yield base_template[:i] + list(fixed) + base_template[i + len(fixed):]

def fill_all_templates(base_template: Template, S: list[str]) -> Iterator[Template]:
    """Recursively fill all the strings in all the slots they fit."""
    if S:
        fixed, *rest_S = S
        for new_template in fill_templates(base_template, fixed):
            yield from fill_all_templates(new_template, rest_S)
    else:
        yield base_template

def generate_possible_solutions(template: Template, alphabet: frozenset[str]) -> Iterator[str]:
    """Recursively generate all the solutions from a single template. For example,
    given a template of *bb* and an alphabet of {a, b}, this would generate:
    abba, abbb, bbba and bbbb
    """
    if template:
        first, *rest = template
        if first is None:
            for solution in generate_possible_solutions(rest, alphabet):
                for letter in alphabet:
                    yield letter + solution
        else:
            for solution in generate_possible_solutions(rest, alphabet):
                yield first + solution
    else:
        yield ''

def generate_solutions(templates: Iterable[Template], alphabet: frozenset[str]) -> Iterator[str]:
    """Generate all the solutions from a number of templates.
    Note that this will contain duplicates. For example, given
    the templates *a and a* and the alphabet {a, b}, this will
    generate aa, ba, aa and ab.
    """
    for template in templates:
        yield from generate_possible_solutions(template, alphabet)

def black_beans(S: list[str], l: int, alphabet: frozenset[str]) -> frozenset[str]:
    return frozenset(generate_solutions(fill_all_templates([None] * l, S), alphabet))

if __name__ == '__main__':
    example = black_beans(['ab', 'ca'], 4, {'a', 'b', 'c'})
    print(example)
    # => frozenset({'cabb', 'caba', 'acab', 'cabc', 'ccab', 'caab', 'bcab', 'abca'})
    print(len(example))
    # => 8

This will generate a lot of duplicates, but if the strings in S are long enough, it should be more efficient than brute-forcing it.

It works by recursively building a sequence of “templates”, which are lists that contain either a character or a wild-card, then expanding the templates to all possible combinations, and lastly removing all of the duplicates. For the example no duplicates actually ever get generated, so it doesn’t seem to be doing too much extra work if that is a representative example.
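One cheap way to gain confidence in the template machinery is to compare it against a brute force on small inputs (the function name brute_force is mine):

```python
from itertools import product

def brute_force(S, l, alphabet):
    """Enumerate every length-l string over the alphabet and keep those
    that contain all strings in S as substrings."""
    return frozenset(
        ''.join(chars)
        for chars in product(sorted(alphabet), repeat=l)
        if all(s in ''.join(chars) for s in S)
    )
```

For the example above, brute_force(['ab', 'ca'], 4, {'a', 'b', 'c'}) returns the same 8 strings, so the two approaches can be cross-checked over randomized small cases before trusting the faster one.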

python – Problems with backslashes in strings, Python

Let me explain my code and my problem:
the code loads an image through a full path, i.e. C:\users\elias\pictures\etc. As far as I know, Python does not accept "\" in the path; it has to be changed to "/".

direccion_imagen = input("Dirección imagén {}: ".format(i + 1))

direccion_imagen_l = direccion_imagen.replace("\\", "/")

imagen = os.path.join(os.getcwd(), direccion_imagen_l)

I get an error in the replace. Thanks a lot in advance.
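The usual culprit here is that in Python source code the backslash is an escape character, so a single backslash must be written as "\\" (or the whole literal marked raw with an r prefix); an empty replace("", "/") raises an error. A small sketch, with a made-up example path:

```python
# "\\" in source code denotes ONE backslash character; r"..." disables escapes.
ruta = "C:\\users\\elias\\pictures\\foto.png"   # hypothetical path
ruta_l = ruta.replace("\\", "/")                # swap backslashes for slashes
print(ruta_l)  # C:/users/elias/pictures/foto.png
```

Note that a path typed interactively through input() arrives with its backslashes intact (no escaping happens there), so replace("\\", "/") on it is enough; alternatively, pathlib.Path accepts both separators on Windows.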