passwords – Writing a simple SHA256 salted hash generator

I saw a video detailing how to write a simple salted hash program in C# here. Here is the code they wrote (slightly modified for console applications):

using System;
using System.Text;
using System.Security.Cryptography;

namespace MyApplication
{
    class Program
    {
        const int SALT_SIZE = 10;

        static void Main(string[] args)
        {                                
            string salt = CreateSalt();
            string password = "securePassword";
            string hashedPassword = GenerateSHA256Hash(password, salt);

            Console.WriteLine("salt: " + salt);
            Console.WriteLine("hashedPassword: " + hashedPassword);                                   
        }

        private static string CreateSalt()
        {
            var rng = new RNGCryptoServiceProvider();
            var buffer = new byte[SALT_SIZE];
            rng.GetBytes(buffer);

            return Convert.ToBase64String(buffer);
        }

        private static string GenerateSHA256Hash(string input, string salt)
        {
            byte[] bytes = Encoding.UTF8.GetBytes(input + salt);
            var hashManager = new SHA256Managed();
            byte[] hash = hashManager.ComputeHash(bytes);

            return ByteArrayToHexString(hash);
        }

        private static string ByteArrayToHexString(byte[] bytes)
        {
            StringBuilder sb = new StringBuilder(bytes.Length * 2);

            foreach (byte b in bytes)
                sb.AppendFormat("{0:x2}", b);

            return sb.ToString();
        }
    }
}

From what I've read online, salted hashes are one of the safest ways to store passwords. However, I have a few questions:

  1. I have read that it is not enough to hash a salted password once. You have to hash it thousands of times to make brute-forcing more difficult for attackers.

    Would doing something like the following be safer, and what is the correct way to repeat the hash?

    var hash = hashManager.ComputeHash(bytes);
    
    for (int i = 0; i < 10000; i++)
        hash = hashManager.ComputeHash(hash);
    

    I also read that you should include the salt when rehashing, but I don't understand how to add it properly (see the sketch after this list).

  2. For the salt buffer size, is 10 a good number to use, or would a higher or lower number be safer (e.g. 16)?

  3. I take this with a grain of salt, but I have read that SHA256 is no longer a safe choice because it is too fast, which makes brute-force attacks quicker to carry out.

    Does this mean that fast algorithms like SHA are obsolete and should be replaced by slower algorithms like bcrypt?

  4. I assume hex strings are a safe way to store salted hashes. Is that correct?

  5. After applying all of the changes to the above questions (if applicable), would the above code be secure enough to be used in a production environment?
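
For points 1 and 3, here is one common way to keep the salt involved in every round: concatenate it with the previous digest before each rehash. This is only a sketch built on the helpers above (the method name and iteration count are illustrative placeholders), and a hand-rolled loop like this is not a vetted key-derivation function:

private static string GenerateIteratedSaltedHash(string input, string salt, int iterations)
{
    byte[] saltBytes = Encoding.UTF8.GetBytes(salt);
    var hashManager = new SHA256Managed();

    // Initial salted hash, exactly as in GenerateSHA256Hash above.
    byte[] hash = hashManager.ComputeHash(Encoding.UTF8.GetBytes(input + salt));

    for (int i = 0; i < iterations; i++)
    {
        // Re-append the salt to the previous digest so every round,
        // not just the first, depends on it.
        byte[] combined = new byte[hash.Length + saltBytes.Length];
        Buffer.BlockCopy(hash, 0, combined, 0, hash.Length);
        Buffer.BlockCopy(saltBytes, 0, combined, hash.Length, saltBytes.Length);
        hash = hashManager.ComputeHash(combined);
    }

    return ByteArrayToHexString(hash);
}

.NET also ships a vetted salted, iterated scheme, PBKDF2, as Rfc2898DeriveBytes, which avoids hand-rolling the loop entirely. A minimal sketch (the 32-byte output size is a common but illustrative choice, and the salt must be at least 8 bytes, so 16 works well here too):

private static string GeneratePbkdf2Hash(string password, byte[] salt, int iterations)
{
    using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
    {
        // 32-byte derived key, Base64-encoded for storage.
        return Convert.ToBase64String(pbkdf2.GetBytes(32));
    }
}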

result sources – The SharePoint 2016 query builder inserts "ContentTypeId" instead of "ContentType"

In on-premises SharePoint 2016, I was having trouble getting results based on a content type I had created when trying to configure a result source to search for that type of content. I finally decided to look at what I had done on a working SharePoint 2013 site, and realized that in the 2016 result source query builder, when I choose "ContentType" and click "Add a property filter", the query builder actually inserts "ContentTypeId", not "ContentType". That baffled me a little. I just wanted to post this in case someone has a similar problem: once I manually edited the inserted text to simply delete the "Id" at the end of "ContentType", the result source worked as I expected. Is this a bug in SharePoint 2016, or is it expected behavior for some reason? If it is a bug, where do you report SharePoint bugs to Microsoft?


python – Creating a generator object with image augmentation to train convolutional neural networks with Keras

I'm currently teaching myself about Python generator objects, and I'm using one to generate training data and augment it on the fly, then feed it into a convolutional neural network.

Could someone help me revise my code? It works fine, but I would like a review to make it more efficient and better structured. Also, how can I verify that using the generator consumes less memory compared to just passing a normal numpy array to the model? (See the sketch after the code.)

Thank you so much!

from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
import pandas as pd
import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
import tensorflow as tf
from augment import ImageAugment

class Generator():
    def __init__(self, feat, labels, width, height):
        self.feat = feat
        self.labels = labels
        self.width = width
        self.height = height

    def gen(self):
        '''
        Yields generator object for training or evaluation without batching
        Yields:
            im: np.array of (1,width,height,1) of images
            label: np.array of one-hot vector of label (1,num_labels)
        '''
        feat = self.feat
        labels = self.labels
        width = self.width
        height = self.height
        i=0
        while (True):
            im = cv2.imread(feat[i],0)
            im = im.reshape(width,height,1)
            im = np.expand_dims(im,axis=0)
            label = np.expand_dims(labels[i],axis=0)
            yield im,label
            i+=1

            if i>=len(feat):
                i=0


    def gen_test(self):
        '''
        Yields generator object to do prediction
        Yields:
            im: np.array of (1,width,height,1) of images
        '''
        feat = self.feat
        width = self.width
        height = self.height
        i=0
        while (True):
            im = cv2.imread(feat[i],0)
            im = im.reshape(width,height,1)
            im = np.expand_dims(im,axis=0)
            yield im
            i+=1


    def gen_batching(self, batch_size):
        '''
        Yields generator object with batching of batch_size
        Args:
            batch_size (int): batch_size
        Yields:
            feat_batch: np.array of (batch_size,width,height,1) of images
            label_batch: np.array of (batch_size,num_labels)
        '''
        feat = self.feat
        labels = self.labels
        width = self.width
        height = self.height
        num_examples = len(feat)
        num_batch = num_examples/batch_size
        X = []
        for n in range(num_examples):
            im = cv2.imread(feat[n],0)
            try:
                im = im.reshape(width,height,1)
            except:
                print('Error on this image: ', feat[n])
            X.append(im)
        X = np.array(X)

        feat_batch = np.zeros((batch_size,width,height,1))
        label_batch = np.zeros((batch_size,labels.shape[1]))
        while(True):
            for i in range(batch_size):
                index = np.random.randint(X.shape[0],size=1)[0] #shuffle the data
                feat_batch[i] = X[index]
                label_batch[i] = labels[index]
            yield feat_batch,label_batch

    # def on_next(self):
    #     '''
    #     Advance to the next generator object
    #     '''
    #     gen_obj = self.gen_test()
    #     return next(gen_obj)
    #
    # def gen_show(self, pred):
    #     '''
    #     Show the image generator object
    #     '''
    #     i=0
    #     while(True):
    #         image = self.on_next()
    #         image = np.squeeze(image,axis=0)
    #         cv2.imshow('image', image)
    #         cv2.waitKey(0)
    #         i+=1

    def gen_augment(self,batch_size,augment):
        '''
        Yields generator object with batching of batch_size and augmentation.
        The number of examples for 1 batch will be multiplied based on the number of augmentation

        augment represents (speckle, gaussian, poisson). It means, the augmentation will be done on the augment list element that is 1
        for example, augment = [1,1,0] corresponds to adding speckle noise and gaussian noise
        if batch_size = 100, the number of examples in each batch will become 300

        Args:
            batch_size (int): batch_size
            augment (list): list that defines what kind of augmentation we want to do
        Yields:
            feat_batch: np.array of (batch_size*n_augment,width,height,1) of images
            label_batch: np.array of (batch_size*n_augment,num_labels)
        '''
        feat = self.feat
        labels = self.labels
        width = self.width
        height = self.height

        num_examples = len(feat)
        num_batch = num_examples/batch_size
        X = []
        for n in range(num_examples):
            im = cv2.imread(feat[n],0)
            try:
                im = im.reshape(width,height,1)
            except:
                print('Error on this image: ', feat[n])
            X.append(im)
        X = np.array(X)

        n_augment = augment.count(1)
        print('Number of augmentations: ', n_augment)
        feat_batch = np.zeros(((n_augment+1)*batch_size,width,height,1))
        label_batch = np.zeros(((n_augment+1)*batch_size,labels.shape[1]))

        while(True):
            i=0
            while (i<=batch_size):
                index = np.random.randint(X.shape[0],size=1)[0] #shuffle the data
                aug = ImageAugment(X[index])
                feat_batch[i] = X[index]
                label_batch[i] = labels[index]

                j=0
                if augment[0] == 1:
                    feat_batch[(j*n_augment)+i+batch_size] = aug.add_speckle_noise()
                    label_batch[(j*n_augment)+i+batch_size] = labels[index]
                    j+=1

                if augment[1] == 1:
                    feat_batch[(j*n_augment)+i+batch_size] = aug.add_gaussian_noise()
                    label_batch[(j*n_augment)+i+batch_size] = labels[index]
                    j+=1

                if augment[2] == 1:
                    feat_batch[(j*n_augment)+i+batch_size] = aug.add_poisson_noise()
                    label_batch[(j*n_augment)+i+batch_size] = labels[index]
                    j+=1

                i+=1


            yield feat_batch,label_batch

def CNN_model(width,height):
    # #create model
    model = Sequential()
    model.add(Conv2D(64, kernel_size=3, activation="relu", input_shape=(width,height,1)))
    model.add(Conv2D(32, kernel_size=3, activation="relu"))
    model.add(Flatten())
    model.add(Dense(labels.shape[1], activation="softmax"))

    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    return model


if __name__ == "__main__":
    input_dir = './mnist'
    output_file = 'dataset.csv'

    filename = []
    label = []
    for root,dirs,files in os.walk(input_dir):
        for file in files:
            full_path = os.path.join(root,file)
            filename.append(full_path)
            label.append(os.path.basename(os.path.dirname(full_path)))

    data = pd.DataFrame(data={'filename': filename, 'label':label})
    data.to_csv(output_file,index=False)

    labels = pd.get_dummies(data.iloc[:,1]).values

    X, X_val, y, y_val = train_test_split(
                                            filename, labels,
                                            test_size=0.2,
                                            random_state=1234,
                                            shuffle=True,
                                            stratify=labels
                                            )

    X_train, X_test, y_train, y_test = train_test_split(
                                                        X, y,
                                                        test_size=0.025,
                                                        random_state=1234,
                                                        shuffle=True,
                                                        stratify=y
                                                        )

    width = 28
    height = 28

    test_data = pd.DataFrame(data={'filename': X_test})


    image_gen_train = Generator(X_train,y_train,width,height)
    image_gen_val = Generator(X_val,y_val,width,height)
    image_gen_test = Generator(X_test,None,width,height)


    batch_size = 900
    print('len data: ', len(X_train))
    print('len test data: ', len(X_test))

    #augment represents (speckle, gaussian, poisson). It means, the augmentation will be done on the augment list element that is 1
    #for example, augment = [1,1,0] corresponds to adding speckle noise and gaussian noise
    augment = [1,1,1]
    model = CNN_model(width,height)

    model.fit_generator(
                        generator=image_gen_train.gen_augment(batch_size=batch_size,augment=augment),
                        steps_per_epoch=np.ceil(len(X_train)/batch_size),
                        epochs=20,
                        verbose=1,
                        validation_data=image_gen_val.gen(),
                        validation_steps=len(X_val)
                        )
    model.save('model_aug_3.h5')
    model = tf.keras.models.load_model('model_aug_3.h5')

    #Try evaluate_generator
    image_gen_test = Generator(X_test,y_test,width,height)
    print(model.evaluate_generator(
                            generator=image_gen_test.gen(),
                            steps=len(X_test)
                            ))

    #Try predict_generator
    image_gen_test = Generator(X_test,None,width,height)
    pred = model.predict_generator(
                            generator=image_gen_test.gen_test(),
                            steps=len(X_test)
                            )
    pred = np.argmax(pred,axis=1)
    # image_gen_test = Generator(X_test,pred,width*3,height*3)
    # image_gen_test.gen_show(pred)
    wrong_pred = []
    for i,ex in enumerate(zip(pred,y_test)):
        if ex[0] != np.argmax(ex[1]):
            wrong_pred.append(i)
    print(wrong_pred)

    # for i in range(len(X_test)):
    #     im = cv2.imread(X_test[i],0)
    #     im = cv2.putText(im, str(pred[i]), (10,15), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
    #     print(i)
    #     cv2.imshow('image',im)
    #     cv2.waitKey(0)
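
Regarding the memory question: a rough way to check it empirically is to probe the process's resident set size before and after each approach, for example with psutil (an assumed extra dependency), run in the context of the script above so that X_train, width, height, batch_size, and image_gen_train already exist. Note that gen_batching as written pre-loads every image into X before yielding, so the generator path may save less memory than expected; the numbers are only indicative.

import os

import psutil  # assumed installed: pip install psutil

def rss_mb():
    """Resident set size of the current process, in megabytes."""
    return psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2

baseline = rss_mb()

# Approach 1: materialize the whole training set as a single array.
X_full = np.array([cv2.imread(f, 0).reshape(width, height, 1) for f in X_train])
print('full array: +%.1f MB' % (rss_mb() - baseline))
del X_full

baseline = rss_mb()

# Approach 2: pull a single batch from the generator.
gen = image_gen_train.gen_batching(batch_size=batch_size)
feat_batch, label_batch = next(gen)
print('one generator batch: +%.1f MB' % (rss_mb() - baseline))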

Finite $p$-groups of coclass $3$, class at least $4$, and controlled generator growth

I am trying to prove the following remark (Ref. https://link.springer.com/article/10.1007/s00605-016-0938-5, page 684, Remark 3.2):

Let $G$ be a finite $p$-group of coclass $3$ and class $\geq 4$. Then $G$ satisfies $d(Z_2(G)/Z(G)) = d(G)\,d(Z(G))$ if and only if $d(G) = 2$. Here $d$ denotes the rank of the corresponding group, and the coclass is $n - c$ if $|G| = p^n$ and $G$ has class $c$.

My attempt is to link the question to the discussion in the thread: Groups in which the lower central series and the upper central series coincide

"$ Rightarrow $"Assuming the condition, we have $ d (Z_2 (G) / {Z (G)}) geq 2 $. Suppose if possible $ | Z_2 (G) / {Z (G)} | geq p ^ 3 $. Then we can directly verify that $ G / Z_ {c-1} (G) cong C_p times C_p, Z_2 (G) / {Z (G)} cong C ^ 3_p, Z (G) cong C_p $ and $ Z_i (G) / {Z_ {i-1} (G)} cong C_p $ for the remaining upper central quotients. Now since $ G $ and $ G / {Z (G)} $ has the same class, using Lem.4.1.13 (Ref. https://books.google.co.in/books/about/The_Structure_of_Groups_of_Prime_Power_O.html?id=34khoLiyP_QC&redir_esc=y) $ Z (G) = gamma_c (G) $. At this point proving that $ G $ is a UL group (i.e. the lower and upper central series of $ G $ coincide) would imply that side. However, we do know that $ gamma_ {c + 1-i} (G) subseteq Z_i (G) $ (i.e. the gamma groups are smaller).

I'm stuck here. Ideas?

gpu – What is the best Bitcoin Vanity address generator?

Can anyone suggest a Bitcoin vanity address generator that can generate at least six characters (ideally case-sensitive)? It would be even better if the generator could create seven or more characters. I don't mind paying a little, but I would prefer the service to be free.

I'm looking to create a vanity address like 1Bitcoin... but my system is not powerful enough. I probably need one or more GPU cards.

There was an article on this subject several years ago; I would like something more contemporary if possible.

Thank you.

Article manager / content generator

To get more visibility than others writing in my language, Portuguese, does the article content generator also have to be in Portuguese?

random – What is the probability that a pseudo-random number generator will generate a long sequence of similar numbers?

What is the probability that a pseudo-random number generator will generate a long sequence of similar numbers? "Similar numbers" can be the same numbers or numbers from a given range.

For example, if we consider a PRNG algorithm that is a simple counter counting from 0 to MAX, the distribution is uniform and there is a guarantee that numbers will not repeat within a sequence. So never repeating a number does not break uniformity. But it probably does break the probabilities, doesn't it? To what extent? If so, does that mean that the better the algorithm, the weaker the guarantee that it will not generate similar numbers in sequence?

I am particularly interested in answers regarding the Mersenne Twister, as the most popular PRNG in programming language implementations. It would also be great to know how things work in the crypto-secure PRNGs of operating systems: Yarrow (macOS), Fortuna (FreeBSD), and ChaCha20 (Linux).
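
As a baseline for experiments: for an ideal uniform generator, the probability that k consecutive outputs all land inside a window covering a fraction q of the range is q^k, and this is easy to check empirically against Python's random module, which implements the Mersenne Twister. A minimal sketch (the window width, run length, and trial count are arbitrary choices):

import random

# For an ideal uniform generator, P(k consecutive draws in a window of
# fractional width q) = q**k. Compare the Mersenne Twister against that.
rng = random.Random(42)

q = 0.1            # window covers 10% of [0, 1)
k = 3              # run length being tested
trials = 1_000_000

hits = sum(
    all(rng.random() < q for _ in range(k))
    for _ in range(trials)
)

print('empirical:  %.6f' % (hits / trials))  # should be close to q**k
print('ideal q**k: %.6f' % (q ** k))         # 0.001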

SITEMAP generator | Web Talk Hosting

Certainly, here is one of the ones I use:
https://github.com/***oftware/python-sitemap

You just run it against your local IP/domain. Keep in mind that you will want to throttle it a little to reduce the load, though probably less than you would against a remote system.
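
To make the throttling point concrete: this is not python-sitemap's actual interface, just a toy, self-contained crawler with a sleep between requests (the start URL, delay, and page limit are placeholders):

import time
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkParser(HTMLParser):
    """Collects href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value:
                    self.links.append(value)

def crawl(start_url, delay=0.5, limit=100):
    """Breadth-first crawl of one host, sleeping `delay` seconds per fetch."""
    host = urlparse(start_url).netloc
    seen, queue, pages = set(), [start_url], []
    while queue and len(pages) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urllib.request.urlopen(url, timeout=10).read().decode('utf-8', 'ignore')
        except Exception:
            continue
        pages.append(url)
        parser = LinkParser()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link).split('#')[0]
            if urlparse(absolute).netloc == host:
                queue.append(absolute)
        time.sleep(delay)  # the throttle: keep the load on the server low
    return pages

if __name__ == '__main__':
    for page in crawl('http://localhost:8000/'):
        print('<url><loc>%s</loc></url>' % page)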

Is this form generator a valid GoF Composite?

Looking for a good real-world PHP example, I found this "composite" example using:

  • FormElement as Component
  • Fieldset and Form as containers
  • Entry as Leaf

(this is the UML of my code):

(image: UML class diagram of the form elements)

So, is this form generator a valid GoF Composite?
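
For comparison, the structure those bullets describe maps onto the GoF Composite like this. A minimal sketch in Python rather than PHP (class names taken from the bullets above; render() is an assumed operation, since the actual interface is not shown):

from abc import ABC, abstractmethod

class FormElement(ABC):
    """Component: the interface shared by leaves and containers."""
    @abstractmethod
    def render(self):
        ...

class Entry(FormElement):
    """Leaf: a single field with no children."""
    def __init__(self, name):
        self.name = name

    def render(self):
        return '<input name="%s">' % self.name

class Fieldset(FormElement):
    """Container: holds any FormElement, including nested Fieldsets."""
    def __init__(self):
        self.children = []

    def add(self, child):
        self.children.append(child)

    def render(self):
        return '<fieldset>%s</fieldset>' % ''.join(c.render() for c in self.children)

class Form(Fieldset):
    """Top-level container: same composite behavior, different wrapper tag."""
    def render(self):
        return '<form>%s</form>' % ''.join(c.render() for c in self.children)

# Clients treat leaves and containers uniformly through FormElement:
form = Form()
fieldset = Fieldset()
fieldset.add(Entry('email'))
fieldset.add(Entry('password'))
form.add(fieldset)
print(form.render())

The defining Composite property is that the containers both hold FormElement children and are themselves FormElements, so client code never needs to know whether it is rendering a single Entry or a whole subtree.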

BLOGGER Online Gift Card Generator Landing Pages for CPA Offers for $10

BLOGGER Online Gift Card Generator Landing Pages for CPA Offers

Hi,
You no longer need to buy domain names and hosting to promote your CPA offers. You can promote your CPA offers 100% for free using professional BLOGGER landing pages. I'll give you the best professional, high-converting landing pages for BLOGGER. Landing pages available:

  1. Facebook gift card online generator.
  2. Amazon gift card online generator.
  3. Google Play gift card online generator.
  4. Steam gift card online generator.
  5. Xbox gift card online generator.
  6. iTunes gift card online generator.
  7. PlayStation gift card online generator.
  8. eBay gift card online generator.

I will guide you step by step to set up your Blogger site. For more information, do not hesitate to contact me.
Here is an example:
https://cardsgiftamazon.blogspot.com

Thank you.
