How the daemon gets a transaction via hash

I am wondering how the daemon gets transaction data from a hash. I'm not asking how I can get it myself; I know I can get it via RPC. I'm asking how the daemon does it behind the scenes. Does it keep a database of every transaction that took place and the block in which it was located, or does it search through each block one by one?
I am stumped on how it does this efficiently. Thank you.
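
For what it's worth, here is a minimal C# sketch of the two strategies the question contrasts: a prebuilt txid-to-block index versus scanning every block. The types are hypothetical and only illustrate the trade-off; they are not bitcoind's actual code.

using System.Collections.Generic;

class Block
{
    public string Hash;
    public List<string> TxIds;
}

class TxLookup
{
    // Strategy 1: a prebuilt index mapping txid -> containing block hash.
    // Built once, after which each lookup is a single dictionary probe.
    public static Dictionary<string, string> BuildIndex(IEnumerable<Block> chain)
    {
        var index = new Dictionary<string, string>();
        foreach (var block in chain)
            foreach (var txid in block.TxIds)
                index[txid] = block.Hash;
        return index;
    }

    // Strategy 2: no index; walk every block until the txid turns up.
    // The cost grows with the total number of transactions on the chain.
    public static string Scan(IEnumerable<Block> chain, string txid)
    {
        foreach (var block in chain)
            if (block.TxIds.Contains(txid))
                return block.Hash;
        return null;
    }
}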

hash – Why can't I crack NTLM hashes using Ophcrack?

I am trying to understand the way Windows computers store passwords through an exercise which is as follows:

  • Configure a fully patched Windows 10 Pro (64-bit) virtual machine in VMware Workstation Pro (completed)
  • Create 3 common users with very weak passwords (done)
  • Extract password hashes from the SAM database using PwDump7. The PwDump7 executable must be launched from a privileged command prompt (with administrative privileges) (finished)
  • Redirect the output of PwDump7 to a hashes.txt file (done)

From this step, I get the following file:

Administrateur:500:1572D4F95361A8BB91A573D91D0FECD3:F198938E100D4E18CEEC6FF1B77AF209:::
Invité:501:C3909CE10A2E26E43E14033ED7C252D3:8FCB137FA6D5907BEC96D82D5631CF14:::
:503:F12B8A9E3FA47FDC9C5098B04BB15FE2:FA1440E32F8F6B4B5F276A723209E0AE:::
:504:F1601AD372DB88DAADE8A5B92B097581:D23055FD3030A0C8A1B56A3689AAB745:::
Admin:1000:1AF4EF0E6E14CE97C8D1CCAD392C46D7:16FCC3441F3005923F649E8F9D1684F5:::
:1001:3A4C1FB7E7FF9EE2972AEC3F193D7552:3AF5C318F2268DD802E79D8A24E89959:::
:1002:D7366D17F427BDD3A29E4A589002ACD7:2FD2BF7C980AD86E4D8A0527A3E136DE:::
:1003:D2E2DE9405BC29AA60B0BF02E8E48819:1EC1DFA1A4CB0B642100697C419EBBB5:::

So:

  • Get the missing account names with the following command: wmic useraccount get name,sid
  • Use its output to complete the file obtained in the previous step

So at the end, I get the following PWDUMP file:

Administrateur:500:1572D4F95361A8BB91A573D91D0FECD3:F198938E100D4E18CEEC6FF1B77AF209:::
Invité:501:C3909CE10A2E26E43E14033ED7C252D3:8FCB137FA6D5907BEC96D82D5631CF14:::
DefaultAccount:503:F12B8A9E3FA47FDC9C5098B04BB15FE2:FA1440E32F8F6B4B5F276A723209E0AE:::
WDAGUtilityAccount:504:F1601AD372DB88DAADE8A5B92B097581:D23055FD3030A0C8A1B56A3689AAB745:::
Admin:1000:1AF4EF0E6E14CE97C8D1CCAD392C46D7:16FCC3441F3005923F649E8F9D1684F5:::
Martin:1001:3A4C1FB7E7FF9EE2972AEC3F193D7552:3AF5C318F2268DD802E79D8A24E89959:::
Jason:1002:D7366D17F427BDD3A29E4A589002ACD7:2FD2BF7C980AD86E4D8A0527A3E136DE:::
Shiela:1003:D2E2DE9405BC29AA60B0BF02E8E48819:1EC1DFA1A4CB0B642100697C419EBBB5:::

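For reference, each line in a PWDUMP-style file has the form user:RID:LM hash:NT hash:::. Here is a minimal C# sketch of parsing one such line; the type and field names are my own:

using System;

class PwdumpLine
{
    public string User;
    public int Rid;
    public string LmHash;  // third field: the LM hash slot
    public string NtHash;  // fourth field: the NT hash slot

    // Parses one "user:RID:LM:NT:::" line.
    public static PwdumpLine Parse(string line)
    {
        var parts = line.Split(':');
        if (parts.Length < 4)
            throw new FormatException("Expected user:RID:LM:NT::: format");
        return new PwdumpLine
        {
            User = parts[0],
            Rid = int.Parse(parts[1]),
            LmHash = parts[2],
            NtHash = parts[3]
        };
    }
}

For example, parsing the Admin line above yields Rid = 1000, LmHash = "1AF4EF0E6E14CE97C8D1CCAD392C46D7" and NtHash = "16FCC3441F3005923F649E8F9D1684F5".
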
In the next step, I am supposed to crack these hashes using Ophcrack. I open it, load the PWDUMP file, and install the Vista Free tables, but when I run the crack, none of the passwords are found. Here are the expected passwords:

  • Martin: apple
  • Jason: qwerty
  • Shiela: test

The Vista Free rainbow table is supposed to cover two types of attack: brute force for passwords of 1 to 4 characters, and a dictionary attack. So even if "qwerty" and "apple" are not in the dictionary (odd, but why not), Ophcrack should at least find "test".

Note: the exercise comes from the latest edition (2018) of a well-known certification, so it should not be out of date.

This makes me think there is something I do not understand about the way Windows stores passwords. By default, LM hashes are supposed to be disabled in Windows 10, so why are they present in the PWDUMP file? Are these hashes salted in a way I don't know about? If so, how can I get the salts?

Has Microsoft found a way to further scramble the contents of the SAM database?

So far, my Google search has returned only outdated documentation. Thanks in advance for any help you could give me.

api design – Appropriate way to use a fixed hash as authorization

I am implementing a system for a client who has asked me to use a fixed hash as authorization to protect the API. This fixed value would therefore be sent in the header of the HTTP call as "Authorization": "(the hash)".

Meanwhile, while looking into the relevant RFCs, I learned that the Authorization: scheme was introduced by the W3C in HTTP 1.0. Can anyone tell me whether what I am doing is wrong (i.e., goes against this standard)?

So I looked at other APIs that use a fixed hash value, and I noticed that they send it in the URL parameters themselves, for example: POST https://language.googleapis.com/v1/documents:analyzeEntities?key=API_KEY. I would like to know what standard they follow.
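
For illustration, here is a minimal sketch of the two styles in C# with HttpClient; the URLs and the hash value are placeholders, not the client's actual values.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ApiAuthExamples
{
    static async Task Main()
    {
        var client = new HttpClient();
        const string fixedHash = "d2ab...";  // placeholder fixed value

        // Style 1: the fixed value sent raw in the Authorization header.
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://api.example.com/v1/resource");
        request.Headers.TryAddWithoutValidation("Authorization", fixedHash);
        var response1 = await client.SendAsync(request);

        // Style 2: the fixed value as a URL query parameter, as in the
        // Google API example above.
        var response2 = await client.PostAsync(
            "https://api.example.com/v1/resource?key=" + fixedHash,
            new StringContent("{}"));

        Console.WriteLine(response1.StatusCode + " / " + response2.StatusCode);
    }
}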

hash – Argon2id configuration – Information Security Stack Exchange

I read an article about using Argon2id in C# here.

Here is the code they wrote (slightly modified):

using System;
using System.Diagnostics;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using Konscious.Security.Cryptography;   

namespace Playground
{
    class Program
    {
        // No. of CPU Cores x 2.
        private const int DEGREE_OF_PARALLELISM = 16;

        // Recommended minimum value.
        private const int NUMBER_OF_ITERATIONS = 4;

        // 600 MB.
        private const int MEMORY_TO_USE_IN_KB = 600000;

        static void Main(string[] args)
        {
            var password = "SomeSecurePassword";
            byte[] salt = CreateSalt();
            byte[] hash = HashPassword(password, salt);

            var otherPassword = "SomeSecurePassword";                                
            var success = VerifyHash(otherPassword, salt, hash);                
            Console.WriteLine(success ? "Passwords match!" : "Passwords do not match.");                
        }

        private static byte[] CreateSalt()
        {
            var buffer = new byte[16];
            var rng = new RNGCryptoServiceProvider();
            rng.GetBytes(buffer);

            return buffer;
        }

        private static byte[] HashPassword(string password, byte[] salt)
        {
            var argon2id = new Argon2id(Encoding.UTF8.GetBytes(password));
            argon2id.Salt = salt;
            argon2id.DegreeOfParallelism = DEGREE_OF_PARALLELISM;
            argon2id.Iterations = NUMBER_OF_ITERATIONS;
            argon2id.MemorySize = MEMORY_TO_USE_IN_KB;

            return argon2id.GetBytes(16);
        }

        private static bool VerifyHash(string password, byte[] salt, byte[] hash)
        {
            var newHash = HashPassword(password, salt);
            return hash.SequenceEqual(newHash);
        }
    }
}

I have the following questions:

  1. On the Konscious.Security.Cryptography README page, instead of argon2id.GetBytes(16), they use argon2.GetBytes(128), which returns a longer value.

Assuming the configurations are otherwise the same, is the 128-byte output a safer approach than the 16-byte one because it's longer?

  2. From what I understand, the more memory we give Argon2id, the more secure it will be against custom-hardware attacks.

So I guess that even if 40 iterations with 70 MB and 4 iterations with 600 MB take about the same time, the higher memory cost of the latter configuration is justified because it is more secure. Is that correct?
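
To make question 2 concrete, here is a minimal sketch that times the two configurations mentioned, reusing the same Konscious.Security.Cryptography API as the code above; the parameter values come straight from the question:

using System;
using System.Diagnostics;
using System.Security.Cryptography;
using System.Text;
using Konscious.Security.Cryptography;

class Argon2Timing
{
    static void Main()
    {
        var rng = new RNGCryptoServiceProvider();
        var salt = new byte[16];
        rng.GetBytes(salt);

        Time(salt, iterations: 40, memoryInKb: 70000);   // ~70 MB
        Time(salt, iterations: 4, memoryInKb: 600000);   // ~600 MB
    }

    static void Time(byte[] salt, int iterations, int memoryInKb)
    {
        var argon2id = new Argon2id(Encoding.UTF8.GetBytes("SomeSecurePassword"))
        {
            Salt = salt,
            DegreeOfParallelism = 16,
            Iterations = iterations,
            MemorySize = memoryInKb  // Konscious takes this value in KB
        };

        var sw = Stopwatch.StartNew();
        argon2id.GetBytes(16);
        sw.Stop();
        Console.WriteLine(iterations + " iterations / " + memoryInKb + " KB: "
            + sw.ElapsedMilliseconds + " ms");
    }
}

If the two calls really do take about the same wall-clock time, the 600 MB configuration still forces roughly eight times the memory per guess, which is exactly the cost the question is asking about.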

passwords – Writing a simple SHA256 salted hash generator

I saw a video detailing how to write a simple salted hash program in C# here. Here is the code they wrote (slightly modified for console applications):

using System;
using System.Text;
using System.Security.Cryptography;

namespace MyApplication
{
    class Program
    {
        const int SALT_SIZE = 10;

        static void Main(string[] args)
        {                                
            string salt = CreateSalt();
            string password = "securePassword";
            string hashedPassword = GenerateSHA256Hash(password, salt);

            Console.WriteLine("salt: " + salt);
            Console.WriteLine("hashedPassword: " + hashedPassword);                                   
        }

        private static string CreateSalt()
        {
            var rng = new RNGCryptoServiceProvider();
            var buffer = new byte[SALT_SIZE];
            rng.GetBytes(buffer);

            return Convert.ToBase64String(buffer);
        }

        private static string GenerateSHA256Hash(string input, string salt)
        {
            byte[] bytes = Encoding.UTF8.GetBytes(input + salt);
            var hashManager = new SHA256Managed();
            byte[] hash = hashManager.ComputeHash(bytes);

            return ByteArrayToHexString(hash);
        }

        private static string ByteArrayToHexString(byte[] bytes)
        {
            StringBuilder sb = new StringBuilder(bytes.Length * 2);

            foreach (byte b in bytes)
                sb.AppendFormat("{0:x2}", b);

            return sb.ToString();
        }
    }
}

From what I've read online, salted hashes are one of the safest ways to store passwords. However, I have a few questions:

  1. I have read that it is not enough to hash a salted password once. You have to hash it thousands of times to make brute forcing more difficult for attackers.

    Would doing something like the code below be safer, and is this the right way to repeat the hash?

    var hash = hashManager.ComputeHash(bytes);
    
    for (int i = 0; i < 10000; i++)
        hash = hashManager.ComputeHash(hash);
    

    I also read that you should also include the salt when rehashing, but I don't understand how to add it properly (see the sketch after this list).

  2. For the salt buffer size, is 10 a good number to use, or would a higher/lower number be safer (e.g. 16)?

  3. I take this with a grain of salt, but I read that SHA256 is no longer a safe choice because it is too fast, which makes brute-force attacks quicker to run.

    Does this mean that fast algorithms like SHA are obsolete and should be replaced by slower algorithms like bcrypt?

  4. I assume hex strings are a safe way to store salted hashes. Is that correct?

  5. After applying all of the changes from the questions above (where applicable), would the above code be secure enough to be used in a production environment?
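
Regarding question 1, here is a minimal sketch of one way to fold the salt back in on every round. It only illustrates the mechanics the question asks about and is not a vetted construction; a real application would reach for PBKDF2, bcrypt, or Argon2, as question 3 suggests.

using System.Security.Cryptography;

class IteratedHash
{
    // Re-hashes the password thousands of times, feeding the salt back
    // in on each round so that every iteration depends on it.
    public static byte[] IteratedSaltedHash(byte[] password, byte[] salt, int rounds)
    {
        using (var sha256 = SHA256.Create())
        {
            byte[] hash = sha256.ComputeHash(Combine(password, salt));
            for (int i = 0; i < rounds; i++)
                hash = sha256.ComputeHash(Combine(hash, salt));
            return hash;
        }
    }

    // Concatenates two byte arrays.
    private static byte[] Combine(byte[] a, byte[] b)
    {
        var result = new byte[a.Length + b.Length];
        a.CopyTo(result, 0);
        b.CopyTo(result, a.Length);
        return result;
    }
}

Note that .NET's Rfc2898DeriveBytes already implements this salt-plus-iteration-count pattern (PBKDF2), so in practice it is preferable to a hand-rolled loop.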

hash – Why are hashes encoded in base16 (hex) rather than base58?

SHA256 hashes are encoded as plain binary in the Bitcoin protocol and in block storage (32 bytes, in little-endian byte order when treated as integers, which effectively makes them "base256"). Hexadecimal text encoding is simply used to display and enter hashes in the software. Hex is a widely used and well-understood format for representing byte arrays. Additionally, most cryptographic APIs already support parsing big integers and other values from hexadecimal strings, whereas base58 isn't really used much outside of Bitcoin.
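
As a small illustration of that display step, here is a sketch that hex-encodes a raw 32-byte hash, together with the reversed-byte rendering you get when the bytes are read as a little-endian integer (the convention used when txids are displayed):

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class HashDisplay
{
    static void Main()
    {
        // A raw 32-byte hash, as it would sit in binary storage.
        byte[] hash = SHA256.Create().ComputeHash(Encoding.UTF8.GetBytes("hello"));

        // Straight hex encoding of the stored bytes.
        string hex = string.Concat(hash.Select(b => b.ToString("x2")));

        // The same bytes reversed, i.e. read as a little-endian integer.
        string reversedHex = string.Concat(hash.Reverse().Select(b => b.ToString("x2")));

        Console.WriteLine(hex);
        Console.WriteLine(reversedHex);
    }
}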

It seems there is little reason to change the way hash I/O is performed, as this would only add confusion about which kind of string is in use. The purpose of Base58Check strings was not just to make them shorter (Base64 could have been used for that), but also to eliminate characters that look alike or are incompatible with URI formats, because Bitcoin addresses are meant to be shared.

Using dd to get the hash of a non-system partition encrypted by VeraCrypt

I am trying to use dd for Windows to get the hash of a non-system partition that has been encrypted with VeraCrypt, but I have run into a bit of a problem.

The command I used to get the hash of the encrypted partition looks like this

dd if=\\?\Device\HarddiskVolume11 of=hash_output.txt bs=512 count=1

And this command should, in theory, create a file called hash_output.txt containing the encrypted hash, which should look something like this:

(Šö÷…o¢–n(¨hìùlŒ‡¬»J`

However, the output I get when executing the above DD command looks more like this:

fb55 d397 2879 2f55 7653 24a3 c250 14d3
3711 7109 e563 617f ab73 f11a 3469 33bb

This, of course, is not the hash I expected, so I hope someone can help me understand what I am doing wrong.

Some points to note:

  • I am 100% sure that the drive I select in the DD command is the correct drive.
  • There is only one encrypted partition on the drive that covers the entire size of the drive.
  • There is no physical / functional damage to the drive that could cause this problem.
  • This is a 1 TB external drive connected via USB 3.0 (I have tried other cables and ports).
  • The same dd command worked fine on a test drive that I encrypted using the same parameters as this drive.
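
A guess worth checking: the two outputs above may be the same bytes rendered in two different ways, raw binary in a text editor versus a hex-dump view. Here is a minimal C# sketch that prints the first 512 bytes of a file both ways; the path is a placeholder for the dd output file:

using System;
using System.IO;
using System.Text;

class FirstSectorView
{
    static void Main()
    {
        var sector = new byte[512];
        using (var fs = File.OpenRead("hash_output.txt"))  // placeholder path
            fs.Read(sector, 0, sector.Length);

        // View 1: the bytes interpreted as text, which is what a plain
        // editor shows (mostly unprintable garbage for encrypted data).
        Console.WriteLine(Encoding.GetEncoding("ISO-8859-1").GetString(sector));

        // View 2: the same bytes as a hex dump grouped into 16-bit words,
        // similar to the second output above.
        for (int i = 0; i < sector.Length; i += 2)
        {
            Console.Write("{0:x2}{1:x2} ", sector[i], sector[i + 1]);
            if ((i + 2) % 16 == 0) Console.WriteLine();
        }
    }
}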

tls – OAuth2 – Sending a hash of your client_secret when using the client credentials grant, instead of the secret itself

I am working on an API that I would like to be accessible internally by other servers as well as by devices, both of which I consider to be confidential, private clients. The devices count as private clients because the client_secret is stored in an encrypted area that prevents unauthorized reading and modification (even if nothing is ever bulletproof).

For authentication, I would like to use OAuth2 with the client_credentials grant, which seems to be a very good fit for these use cases. However, I wonder how flexible the standard is about the way the client_secret is sent.

Basically, the RFC doesn't say much about sending your client ID / client secret; it just offers an example here: https://tools.ietf.org/html/rfc6749#section-4.4.2, which simply uses the following header: Authorization: Basic base64(client_id:client_secret)

In my opinion, it might be slightly safer to compute a hash instead:

  1. the client requests a random code from the server by sending its client_id
  2. the server responds with a random code (valid for 10 minutes, like an authorization code)
  3. the client computes hash = sha256(client_id, client_secret, code) and requests a token
  4. the server computes the same hash, compares it with the client's hash, and sends an access token if they match

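A minimal sketch of the hash from steps 3 and 4, assuming the three values are joined with a ':' separator (the exact serialization is an assumption that both sides would have to agree on):

using System;
using System.Security.Cryptography;
using System.Text;

class ClientProof
{
    // Step 3 (client side): hash = sha256(client_id, client_secret, code).
    public static string ComputeProof(string clientId, string clientSecret, string code)
    {
        using (var sha256 = SHA256.Create())
        {
            byte[] digest = sha256.ComputeHash(
                Encoding.UTF8.GetBytes(clientId + ":" + clientSecret + ":" + code));
            return BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
        }
    }

    // Step 4 (server side): recompute and compare in constant time.
    public static bool Verify(string clientId, string clientSecret, string code,
                              string presentedProof)
    {
        byte[] expected = Encoding.UTF8.GetBytes(ComputeProof(clientId, clientSecret, code));
        byte[] presented = Encoding.UTF8.GetBytes(presentedProof);
        return CryptographicOperations.FixedTimeEquals(expected, presented);
    }
}
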
This would add an extra layer of security in case https is somehow broken or if someone is able to read the header in one way or another.

However, that doesn't seem very consistent with OAuth2, and I don't really like reinventing a standard.
Another option would be to create my own extension grant; I just wonder whether it's really worth it, since no one seems to have done this.

Also, if I want to share my API with a third-party app, I'm not sure it is a good thing to force them to use something that is not really standard.

hash map – Map of objects sharing characteristics

Please consider the following code:

@Test
public void test() {
    // This is the input:
    Map<String, List<String>> map = new HashMap<String, List<String>>() {{
        put("a", Arrays.asList("calculating", "label", "farm", "anger", "able", "aboriginal"));
        put("e", Arrays.asList("injure", "label", "anger", "teeny-tiny", "able", "mindless"));
        put("i", Arrays.asList("injure", "calculating", "teeny-tiny", "mindless", "aboriginal"));
    }};

    // This is my algorithm:
    Map<String, Set<String>> result = new HashMap<>();
    for (List<String> values : map.values()) {
        for (String str : values) {
            Set<String> l = result.getOrDefault(str, new HashSet<>());
            l.addAll(values);
            l.remove(str);
            result.put(str, l);
        }
    }

    // result contains this:
    // {
    //      able=[teeny-tiny, farm, calculating, aboriginal, label, mindless, anger, injure],
    //      teeny-tiny=[able, calculating, aboriginal, label, mindless, anger, injure],
    //      farm=[able, calculating, aboriginal, label, anger],
    //      calculating=[able, teeny-tiny, farm, aboriginal, label, mindless, anger, injure],
    //      aboriginal=[able, teeny-tiny, farm, calculating, label, mindless, anger, injure],
    //      label=[able, teeny-tiny, farm, calculating, aboriginal, mindless, anger, injure],
    //      mindless=[able, teeny-tiny, calculating, aboriginal, label, anger, injure],
    //      anger=[able, teeny-tiny, farm, calculating, aboriginal, label, mindless, injure],
    //      injure=[able, teeny-tiny, calculating, aboriginal, label, mindless, anger]
    // }
}

This represents a simplification of a problem I am struggling with.

In the first lines, you can see that I have a map whose keys are characteristics (here, just letters) and whose values are lists of objects with those characteristics (here, simply words containing the given letter).

The following lines do what I want: from that map, they build a second map whose keys are the words and whose values are the sets of other words sharing at least one characteristic.

So, for example, the value associated with the key farm contains label (because both contain a) but does not contain injure (because farm and injure don't have a, e or i in common).

Note that the result does not contain the original keys (a, e, i); I don't care about them.

My question is: is there a way to speed things up?

For my real problem, I don't have 3 letters but a few thousand values; I don't have 10 words but around 1 million.

Secondary question: if there is a general name for this problem, I would be glad to know.