NEWS – Two Criminals Arrested In Mexico For Illegal Bitcoin Transactions

Mexican law enforcement has arrested two criminals, Ignacio Santoyo and Hector Ortiz, for illegally conducting bitcoin transactions. Both culprits bought unusually large amounts of bitcoin, and their transactions were flagged as suspicious, which made it easier for the authorities to track them down. Cryptocurrency exchanges located in Mexico are currently required to report transactions above $2,830. More such criminals operate on darknet marketplaces, using bitcoin for illicit purposes, and I hope the authorities arrest them too.

raw transaction – rust-bitcoin Non-canonical DER signature

I am trying to create a raw transaction spending a P2PKH output using rust-bitcoin, but whenever I push the transaction to testnet I receive the following error: mandatory-script-verify-flag-failed (Non-canonical DER signature).
My signing code looks like this:

/// Sign a bitcoin transaction spending P2PKH outputs
/// 
/// # Arguments
/// 
/// * `tx` the transaction to be signed
/// * `script_pubkeys` list of script pubkeys of the inputs (has to be indexed in the same order as the inputs in the transaction)
/// * `skeys` list of secret keys with which to sign the inputs (has to be indexed in the same order as the inputs in the transaction)
/// * `pkeys` list of public keys for the inputs which are spent (has to be indexed in the same order as the inputs in the transaction)
/// * `curve` reference to the elliptic curve object used for signing
pub fn sign_transaction(tx : Transaction, script_pubkeys : Vec<Script>, skeys : Vec<PrivateKey>, pkeys: Vec<PublicKey>, curve : &Secp256k1<All>) -> Transaction {
    let mut signed_inp : Vec<TxIn> = Vec::new();

    for (i, unsigned_inp) in tx.input.iter().enumerate() {
        let script_pubkey = script_pubkeys.get(i).unwrap();
        let signing_key = skeys.get(i).unwrap().key;
        let pub_key = pkeys.get(i).unwrap();
        let sighash = tx.signature_hash(i, &script_pubkey, SIGHASH_ALL);
        let msg = Message::from_slice(&sighash.as_ref()).unwrap();
        let sig = curve.sign(&msg, &signing_key);
        // Standard P2PKH redeem script:
        // <sig> <pubkey>
        let redeem_script = Builder::new()
            .push_slice(&sig.serialize_der())
            .push_int(1)
            .push_key(&pub_key)
            .into_script();
        signed_inp.push(TxIn {
            previous_output : unsigned_inp.previous_output,
            script_sig: redeem_script,
            sequence : unsigned_inp.sequence,
            witness : unsigned_inp.witness.clone()
        });
        
    }
    
    Transaction {
        version : tx.version,
        lock_time : tx.lock_time,
        input : signed_inp,
        output: tx.output
    }
}

Here is an example transaction:

01000000010b41a0a10f36c39b057053281d1d4d9f67410c40a093c351024f57af4e133c21010000006a46304402202d99eb85a14f483f4679b93eb1cf0f67b64d788dc4cd5ad782e75540508922ce0220752a9237e320497ba289a91edd7135dd6c53f9332dc33c71a3020bf0555607fc512103202430a99091407e6c724c5c88504e69ef6917f042a65cb990537922f823d9dfffffffff02e80300000000000017a9141831af16119be20b532668d5995bd05ac955153b872a091e00000000001976a9146b538e889cf0df7b69112b50db8faadb4e457b9288ac00000000

and in decoded form:

{
   "result":{
      "txid":"b36457604e0ccd491e97c30c06a011fd906bb83d6fe81c676efb9fc60f17c5a5",
      "hash":"b36457604e0ccd491e97c30c06a011fd906bb83d6fe81c676efb9fc60f17c5a5",
      "version":1,
      "size":223,
      "vsize":223,
      "weight":892,
      "locktime":0,
      "vin":[
         {
            "txid":"213c134eaf574f0251c393a0400c41679f4d1d1d285370059bc3360fa1a0410b",
            "vout":1,
            "scriptSig":{
               "asm":"304402202d99eb85a14f483f4679b93eb1cf0f67b64d788dc4cd5ad782e75540508922ce0220752a9237e320497ba289a91edd7135dd6c53f9332dc33c71a3020bf0555607fc 1 03202430a99091407e6c724c5c88504e69ef6917f042a65cb990537922f823d9df",
               "hex":"46304402202d99eb85a14f483f4679b93eb1cf0f67b64d788dc4cd5ad782e75540508922ce0220752a9237e320497ba289a91edd7135dd6c53f9332dc33c71a3020bf0555607fc512103202430a99091407e6c724c5c88504e69ef6917f042a65cb990537922f823d9df"
            },
            "sequence":4294967295
         }
      ],
      "vout":[
         {
            "value":0.00001000,
            "n":0,
            "scriptPubKey":{
               "asm":"OP_HASH160 1831af16119be20b532668d5995bd05ac955153b OP_EQUAL",
               "hex":"a9141831af16119be20b532668d5995bd05ac955153b87",
               "reqSigs":1,
               "type":"scripthash",
               "addresses":[
                  "2MuT9izFXKft5umHqr1gviyhus6JGLCxGd6"
               ]
            }
         },
         {
            "value":0.01968426,
            "n":1,
            "scriptPubKey":{
               "asm":"OP_DUP OP_HASH160 6b538e889cf0df7b69112b50db8faadb4e457b92 OP_EQUALVERIFY OP_CHECKSIG",
               "hex":"76a9146b538e889cf0df7b69112b50db8faadb4e457b9288ac",
               "reqSigs":1,
               "type":"pubkeyhash",
               "addresses":[
                  "mqJShCuqM7FK5zPa7Z1PNJySMHTVPrK9bb"
               ]
            }
         }
      ]
   },
   "error":null,
   "id":"curltest"
}

Any help on what could be wrong is much appreciated.
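Two problems are visible in the decoded scriptSig above: the DER signature is pushed bare (a 0x46 = 70-byte push with no trailing sighash-type byte), and push_int(1) inserts a stray OP_1 (the 51/1 sitting between the signature and the pubkey). Script verification expects the signature push to be the DER bytes with the sighash byte appended, which is what "Non-canonical DER signature" is complaining about. A minimal sketch of the fix, using plain byte vectors rather than any particular rust-bitcoin version's types:

```rust
/// Sighash-type byte for SIGHASH_ALL.
const SIGHASH_ALL_BYTE: u8 = 0x01;

/// Append the sighash-type byte to a DER-encoded signature, producing
/// the byte string that belongs in a P2PKH scriptSig push.
fn sig_with_hashtype(der_sig: &[u8]) -> Vec<u8> {
    let mut v = der_sig.to_vec();
    v.push(SIGHASH_ALL_BYTE);
    v
}

fn main() {
    // A 70-byte DER signature (as in the transaction above) becomes a
    // 71-byte push once the sighash byte is appended: 0x46 -> 0x47.
    let der = vec![0x30u8; 70]; // placeholder bytes, not a real signature
    let full = sig_with_hashtype(&der);
    assert_eq!(full.len(), 71);
    assert_eq!(*full.last().unwrap(), 0x01);
}
```

If the Builder API matches the code in the question, the chain would become .push_slice(&sig_with_hashtype(&sig.serialize_der())).push_key(&pub_key), with the .push_int(1) call removed entirely, so the scriptSig is the standard "sig pubkey" pair.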

database design – Why serializability may not be ensured if a transaction unlocks a data item immediately after its final access

The quote below is from Silberschatz's Database System Concepts. It says that if a transaction unlocks a data item immediately after its final access of that data item, serializability may not be ensured. Can you please explain why serializability may not be ensured?

Transaction Ti may unlock a data item that it had locked at some earlier point.
Note that a transaction must hold a lock on a data item as long as it accesses that
item. Moreover, it is not necessarily desirable for a transaction to unlock a data item
immediately after its final access of that data item, since serializability may not be
ensured.
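The standard counterexample: T1 moves a value from A to B; if T1 unlocks A immediately after its last access, T2 can lock and read both items in between and observe a state that no serial order of the two transactions could produce. A sketch of that interleaving with plain variables (the amounts are assumed for illustration):

```rust
fn main() {
    // Invariant both transactions preserve: a + b == 100.
    let mut a = 50;
    let mut b = 50;

    a -= 10;             // T1: final access to A, then (incorrectly) unlocks A
    let t2_view = a + b; // T2: locks A and B, reads both, commits
    b += 10;             // T1: still holds its lock on B, credits B

    // Either serial order (T1 then T2, or T2 then T1) would show 100, but:
    assert_eq!(t2_view, 90);  // T2 saw the invariant broken
    assert_eq!(a + b, 100);   // the final state looks fine, yet the schedule
                              // is not equivalent to any serial schedule
}
```

Holding the lock on A until T1 finishes (as strict two-phase locking requires) would have forced T2 to run entirely before or entirely after T1, so it could never observe the intermediate state.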

blockchain – Could a miner set a maximum transaction fee?

Yes, miners can choose exactly which transactions to include in their candidate blocks, including choosing not to include any transactions at all.

Of course, if a high-fee transaction is available and one miner chooses not to include it, other miners still can. The only exception is a cartel with sufficient hashrate actively performing a 51% attack that chooses not to include the transaction; in that case, it can be censored for as long as the attack lasts.
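Since a block template is entirely up to the miner, a fee ceiling is just one more selection predicate over the mempool. A minimal sketch, with hypothetical (txid, fee) pairs standing in for real mempool entries:

```rust
fn main() {
    // Hypothetical mempool entries: (txid, fee in satoshis).
    let mempool = vec![("a1", 500u64), ("b2", 250_000), ("c3", 1_200)];
    let max_fee = 10_000u64;

    // A miner enforcing a fee ceiling simply filters its candidate set:
    let candidates: Vec<&str> = mempool
        .iter()
        .filter(|(_, fee)| *fee <= max_fee)
        .map(|(txid, _)| *txid)
        .collect();

    // "b2" is skipped by this miner, but other miners may still mine it.
    assert_eq!(candidates, vec!["a1", "c3"]);
}
```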

blockstream – What is the difference between the two transaction headers on the Bitcoin Liquid sidechain network?

You can see transactions for this asset
https://blockstream.info/liquid/address/H4UWQS836njW4QJ6WfkGAPjaYtK2twLnZE

The second transaction has a header of

  • 802dc4c4a08fcf4f50e4320bdb5eb596afc01c95dd0ff6afb83304aeb989be15 (2020-01-07 11:21:58 UTC)

Underneath the transaction is another header of

  • 81efc96bea93fcb5992831d8311dec63a0b1d17072ec2e6f1263901b1bd26000 (2019-10-31 19:05:58 UTC)

What is the difference between these two? Why does one transaction have two headers?

Random Number for Transaction Signatures in R Values – How Is It Created?

I am wondering about the random number k used in signatures, which produces the R value:

R = x-coordinate of k·G

How is it created? Does it have a specific length? Is there a specific random generator used, or any restrictions on its arguments? I saw some Python code online in which it is limited to a maximum of

FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

but since that is in hex format, I don't know what it means. So what is the maximum number that can be used, given that it will be processed as an integer?
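That hex constant is the order n of the secp256k1 group (the number of distinct multiples of the generator G). A valid nonce k is an integer in the range [1, n - 1], so at most 64 hex digits; in practice k must be uniformly random, or derived deterministically per RFC 6979, because a reused or biased k leaks the private key. Since equal-length uppercase hex strings compare lexicographically the same way the integers they encode compare, the range check can be sketched without a bignum library:

```rust
// Order n of the secp256k1 group: valid nonces k satisfy 1 <= k <= n - 1.
const N_HEX: &str = "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141";

/// Check that a 64-digit uppercase hex string encodes a valid nonce.
/// Equal-length uppercase hex compares lexicographically like integers.
fn k_in_range(k_hex: &str) -> bool {
    k_hex.len() == 64
        && k_hex.chars().all(|c| c.is_ascii_hexdigit() && !c.is_ascii_lowercase())
        && k_hex.chars().any(|c| c != '0') // k != 0
        && k_hex < N_HEX                   // k < n
}

fn main() {
    assert!(k_in_range(&format!("{:0>64}", "1"))); // k = 1 is valid
    assert!(!k_in_range(N_HEX));                   // k = n is invalid
    assert!(!k_in_range(&"0".repeat(64)));         // k = 0 is invalid
}
```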

database design – Transaction was deadlocked on lock resources with another process and was chosen as the

I have a table in SQL Server 2014. Please find the structure below:

CREATE TABLE [dbo].[ProductIMEISerialNoes](
[ID] [int] IDENTITY(1,1) NOT NULL,
[IMEI1] nvarchar NULL,
[IMEI2] nvarchar NULL,
[SerialNo] nvarchar NULL,
[ProductModel_ProductID] [int] NULL,
[ProcessDate] [datetime] NOT NULL DEFAULT ('1900-01-01T00:00:00.000'),
[BoxName] nvarchar NULL,
[BoxNo] nvarchar NULL DEFAULT ('0'),
[Order_OrderID] [int] NULL,
[Color] nvarchar NULL,
[BoxSize] [int] NULL DEFAULT ((0)),
[IMEI3] nvarchar NULL,
[IMEI4] nvarchar NULL,
[Version] [int] NULL,
[CurrentStatus] nvarchar NULL,
[BoxStatus] [int] NULL,
[BoxId] [int] NULL,
[PrintStatus] [int] NULL,
[BoxWeight] [decimal](7, 3) NOT NULL DEFAULT ((0.0)),
[TempBoxNo] nvarchar NOT NULL CONSTRAINT [DF_ProductIMEISerialNoes_TempBoxNo] DEFAULT ('0')
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

GO

We currently have around 8 systems running .NET MVC applications that concurrently access the same database and the same table for reads and writes. The issue is that we very frequently get the above exception: "Transaction was deadlocked on lock resources with another process and was chosen as the…". Could you please suggest what can be done to solve or work around this issue? Maybe table restructuring, or something at the SQL Server level? Any help will be greatly appreciated.
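Whatever schema or indexing changes are made, deadlock victims (SQL Server error 1205) are safe to retry, so a common first mitigation is a bounded retry loop in the application layer. A sketch of the client-side pattern; run_with_retry and the error-code plumbing here are hypothetical stand-ins for your data-access layer, not a real SQL Server driver API:

```rust
use std::{thread, time::Duration};

/// SQL Server error number raised when a transaction is chosen as the
/// deadlock victim.
const SQL_DEADLOCK_VICTIM: i32 = 1205;

/// Retry a unit of work up to `max_attempts` times when it loses a deadlock.
/// `work` stands in for "open transaction, run statements, commit".
fn run_with_retry<F>(mut work: F, max_attempts: u32) -> Result<(), i32>
where
    F: FnMut() -> Result<(), i32>,
{
    let mut attempt = 0;
    loop {
        match work() {
            Ok(()) => return Ok(()),
            Err(SQL_DEADLOCK_VICTIM) if attempt + 1 < max_attempts => {
                attempt += 1;
                // Brief backoff so the competing transaction can finish.
                thread::sleep(Duration::from_millis(50 * attempt as u64));
            }
            Err(e) => return Err(e), // other errors (or retries exhausted)
        }
    }
}

fn main() {
    // Simulate a job that loses the deadlock twice, then succeeds.
    let mut fails = 2;
    let result = run_with_retry(
        || if fails > 0 { fails -= 1; Err(SQL_DEADLOCK_VICTIM) } else { Ok(()) },
        5,
    );
    assert!(result.is_ok());
}
```

Beyond retries, the usual server-side steps are covering indexes to shorten scans (so locks are held on fewer rows for less time), having all writers touch rows in a consistent order, and considering READ_COMMITTED_SNAPSHOT to remove reader/writer blocking.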

senddbmail – SQL Server Long Running Transaction – WAITFOR(RECEIVE conversation….DatabaseMail)

I recently implemented an Agent Job that checks SQL Server every 10 minutes for long-running queries and, if any are detected, sends a mail to the recipients with the information. However, since putting this in, I notice a lot of the query below and wonder if it is something I should be concerned about:

WAITFOR(RECEIVE conversation_handle, service_contract_name, message_type_name, message_body FROM ExternalMailQueue INTO @msgs), TIMEOUT @rec_timeout

I understand it's from Database Mail, and the wait info is (1x: 62093ms)BROKER_RECEIVE_WAITFOR, but should I be worried, or should I simply exclude it from the alerting?

Looking at it via sp_whoisactive, I can see that the open_transaction count is 1 and the status is suspended.

Any help is appreciated.
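That session is Database Mail's Service Broker queue reader sitting idle: BROKER_RECEIVE_WAITFOR is a benign "waiting for work to arrive" wait, not a stuck transaction, so the usual approach is to exclude it from long-running-transaction alerting rather than worry about it. A sketch of that filter, over hypothetical (session_id, wait_type) rows standing in for the monitoring query's output:

```rust
fn main() {
    // Hypothetical monitoring rows: (session_id, wait_type).
    let sessions = vec![
        (51, "BROKER_RECEIVE_WAITFOR"), // Database Mail queue reader, idle
        (67, "LCK_M_X"),               // genuinely blocked session
    ];

    // Benign idle waits to drop from the long-running-transaction alert.
    let ignore = ["BROKER_RECEIVE_WAITFOR"];

    let alerts: Vec<i32> = sessions
        .iter()
        .filter(|(_, wait)| !ignore.contains(wait))
        .map(|(id, _)| *id)
        .collect();

    assert_eq!(alerts, vec![67]); // only the blocked session is alerted on
}
```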

blockchain – nLockTime error PUSH TRANSACTION ERROR: 64: NON-FINAL

I created an nLockTime transaction with Electrum using the time/date feature, which uses unix time instead of a block height for the locktime.

So when pushing the raw tx before its time, I get code 26, "non-final", which is fine, as the unix time has not been reached yet.

Once the time arrived, I tried again, and this time I got "non-final" (code 64). Why? And yes, the BTC being sent from the wallet was confirmed, with at least 1 confirmation. Here is the raw tx code. Any help would be appreciated, as this is driving me crazy.

020000000156ab1d690e27b9d6dc3764233fa73a9749253472443e7d4efd86c7532eb58a64000000006a47304402205531d1b6808572a77bf649d480cb2f4676ead7035ed39bdb1bb1e928d5441a9102205710edaf122ec991ccab08a6945a1a728d08eecae5dbafb1e72c8481e1672f9d012102cbf41593c5fdb8caaf1942e0c4f2256de0a174c85b2d05252a9053dfc08de65afdffffff01a26900000000000017a91432d423f7e7722ed3ab0c87786e877b5f220148708795980460

{
   "txid": "5b0b8137b86986fe65b2d07346d7dde9cc94ac9cd20671f5161b3bc76012a995",
   "hash": "5b0b8137b86986fe65b2d07346d7dde9cc94ac9cd20671f5161b3bc76012a995",
   "version": 2,
   "size": 189,
   "vsize": 189,
   "weight": 756,
   "locktime": 1610914221,
   "vin": [
      {
         "txid": "648ab52e53c786fd4e7d3e4472342549973aa73f236437dcd6b9270e691dab56",
         "vout": 0,
         "scriptSig": {
            "asm": "304402203a3c80d8cb01c28558269fdda6faaa7b1b963030b8867d9c0a933b9813c192c802202770970cdb704416c8fe504a9ed518537e983d2cce534fafa02046aa2181bf7a[ALL] 02cbf41593c5fdb8caaf1942e0c4f2256de0a174c85b2d05252a9053dfc08de65a",
            "hex": "47304402203a3c80d8cb01c28558269fdda6faaa7b1b963030b8867d9c0a933b9813c192c802202770970cdb704416c8fe504a9ed518537e983d2cce534fafa02046aa2181bf7a012102cbf41593c5fdb8caaf1942e0c4f2256de0a174c85b2d05252a9053dfc08de65a"
         },
         "sequence": 4294967293
      }
   ],
   "vout": [
      {
         "value": 0.00026092,
         "n": 0,
         "scriptPubKey": {
            "asm": "OP_HASH160 32d423f7e7722ed3ab0c87786e877b5f22014870 OP_EQUAL",
            "hex": "a91432d423f7e7722ed3ab0c87786e877b5f2201487087",
            "reqSigs": 1,
            "type": "scripthash",
            "addresses": [
               "36Kmqv4k5cxigNvgV6Geejn7Cvc7MR3oPt"
            ]
         }
      }
   ]
}
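The likely explanation is BIP 113: since its activation, a time-based nLockTime is compared against the median time of the previous 11 blocks (median-time-past), not the wall clock, and median-time-past typically trails real time by roughly an hour. So a transaction with locktime 1610914221 stays non-final until the chain's median-time-past passes that value, even after your clock has. The consensus check can be sketched as follows (a simplified version of the shape of Bitcoin Core's IsFinalTx, not its actual code):

```rust
/// nLockTime values below this threshold are block heights; values at or
/// above it are unix timestamps.
const LOCKTIME_THRESHOLD: u32 = 500_000_000;

/// Simplified finality check: `next_height` is the height of the block
/// being built, `median_time_past` is the median timestamp of the previous
/// 11 blocks (the comparison point per BIP 113 for time locks).
fn is_final(lock_time: u32, next_height: u32, median_time_past: u32, sequences: &[u32]) -> bool {
    if lock_time == 0 {
        return true;
    }
    let cutoff = if lock_time < LOCKTIME_THRESHOLD { next_height } else { median_time_past };
    if lock_time < cutoff {
        return true;
    }
    // A locktime is also disabled when every input uses the max sequence.
    sequences.iter().all(|&s| s == 0xFFFF_FFFF)
}

fn main() {
    let seqs = [0xFFFF_FFFD_u32]; // 4294967293, as in the transaction above
    // Wall clock past the locktime but median-time-past still behind it:
    assert!(!is_final(1_610_914_221, 0, 1_610_912_000, &seqs)); // non-final
    // Once median-time-past exceeds the locktime, the tx becomes final:
    assert!(is_final(1_610_914_221, 0, 1_610_916_000, &seqs));
}
```

So waiting a while longer (until median-time-past catches up to 1610914221) should let the broadcast succeed.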

Will a large transaction log file slow DB recovery?

I did a manual failover to the passive node today after Windows updates, and recovery was fast for all but one DB. The one slow DB had a transaction log size of 203,200.00 MB, with log space percent used = 0.25. Will the large transaction log file size slow down recovery?