iot – What types of attacks could be made on the CoAP protocol?

I'm a student studying the security of the CoAP protocol. Thinking about the attack surface, I divided it into internal attacks (i.e. from inside the network) and external attacks (i.e. from outside the network). For these, in a scenario without encryption (i.e. without DTLS), there could be attacks like packet sniffing. I was wondering what other kinds of attacks could be carried out.
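
For example, here is the kind of passive sniffing I have in mind; a minimal sketch assuming Scapy, the default CoAP port 5683, and a hypothetical interface name:

```python
# A minimal sketch of passive CoAP sniffing, assuming Scapy is
# installed, CoAP runs on its default UDP port 5683, and no DTLS is
# in use; the interface name is illustrative. Run with root privileges.
from scapy.all import sniff, IP, UDP, Raw

def show_coap(pkt):
    # Without DTLS, the CoAP header and payload are cleartext on the wire.
    if pkt.haslayer(IP) and pkt.haslayer(UDP):
        payload = bytes(pkt[Raw].load) if pkt.haslayer(Raw) else b""
        print(f"{pkt[IP].src}:{pkt[UDP].sport} -> "
              f"{pkt[IP].dst}:{pkt[UDP].dport}  {payload.hex()}")

sniff(iface="eth0", filter="udp port 5683", prn=show_coap, store=False)
```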

usb – SRAM and DMA in a USB device

USB devices have SRAM and DMA submodules:

  1. What is the role of SRAM in a USB device?
  2. a. How does the DMA controller know how to initiate a request to memory (read or write)?
    b. How does an endpoint get data from the buffers, and how does an endpoint write data to a buffer?
    c. And who will call the DMA controller to initiate the request to the memory device?

Please clarify the questions listed above.
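
To make my current (possibly wrong) mental model concrete, here is a toy Python sketch of the flow I imagine, with all names and addresses made up; please correct it where it is off:

```python
# Toy model of descriptor-based DMA in a USB device. All names are
# hypothetical; real controllers differ, but the flow is typical:
# 1) endpoint hardware deposits packet bytes into an SRAM buffer,
# 2) firmware fills a DMA descriptor (src, dst, length, direction),
# 3) firmware "starts" the DMA, which copies without CPU involvement.

SRAM = bytearray(1024)          # device-local SRAM holding endpoint buffers
MAIN_MEMORY = bytearray(4096)   # system memory the DMA moves data to/from

def endpoint_receive(ep_buf_addr: int, packet: bytes) -> None:
    """Endpoint hardware writes an incoming USB packet into SRAM."""
    SRAM[ep_buf_addr:ep_buf_addr + len(packet)] = packet

def dma_start(src: int, dst: int, length: int, to_memory: bool) -> None:
    """The DMA controller executes one descriptor programmed by firmware."""
    if to_memory:   # SRAM -> main memory (received data being consumed)
        MAIN_MEMORY[dst:dst + length] = SRAM[src:src + length]
    else:           # main memory -> SRAM (outgoing data being staged)
        SRAM[dst:dst + length] = MAIN_MEMORY[src:src + length]

# Firmware's interrupt handler is typically what "calls" the DMA:
endpoint_receive(0x00, b"hello-usb")                       # 1) packet lands in SRAM
dma_start(src=0x00, dst=0x100, length=9, to_memory=True)   # 2+3) firmware kicks DMA
print(bytes(MAIN_MEMORY[0x100:0x109]))                     # b'hello-usb'
```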

A good reference for proving the reliability of a simple blockchain-based protocol?

I would like to learn how to prove the reliability of a blockchain-based protocol, so I'm looking for a simple example. More specifically, I would like to know which formal system is suitable for the job and how to apply it to the properties of the blockchain.

Taken from the Wikipedia article on BAN logic:

In some cases, a protocol was found to be safe by BAN analysis, but was actually not safe. This has led to the abandonment of the BAN family logics in favor of proof methods based on standard invariance reasoning.

Does anyone know which "proof methods based on standard invariance reasoning" the author is referring to?
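
To check my understanding of the term, here is a toy example of what I take "invariance reasoning" to mean: state a property that holds in every reachable state and check that each step preserves it. This is only my illustration, not any particular formal system:

```python
# Toy illustration of invariance reasoning: the invariant is
# "every block's prev_hash equals the hash of the previous block",
# and we check that it is preserved as blocks are appended.
import hashlib
from dataclasses import dataclass

@dataclass
class Block:
    prev_hash: str
    payload: str

def block_hash(b: Block) -> str:
    return hashlib.sha256((b.prev_hash + b.payload).encode()).hexdigest()

def invariant_holds(chain: list) -> bool:
    """Inductive check: the genesis block is the base case, and each
    link is the step case. A formal proof would argue that every
    append operation preserves this property."""
    return all(chain[i].prev_hash == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [Block(prev_hash="0" * 64, payload="genesis")]
chain.append(Block(prev_hash=block_hash(chain[-1]), payload="tx1"))
chain.append(Block(prev_hash=block_hash(chain[-1]), payload="tx2"))
print(invariant_holds(chain))   # True
chain[1].payload = "tampered"   # any tampering breaks the invariant
print(invariant_holds(chain))   # False
```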

Microsoft Server Message Block 3.1.1 Protocol Vulnerability (SMBv3)

Microsoft has released instructions to disable Server Message Block 3.1.1 (SMBv3).

Computer Networks – Sending Endless Frames in the One-Bit Sliding Window Protocol?

I am reading Computer Networks by Andrew S. Tanenbaum, and I wonder if there is an error in the protocol, because I can't find any resolution for the following scenario.

Suppose that the transmitter (A) has only one frame (X) to send and that the receiver (B) has nothing to send. B receives X and sends A a frame with an empty information field and an acknowledgment for X. At A, this triggers the frame_arrival event, which in turn causes A to send an "acknowledgment of receipt" to B, still without any packets (as there was only one). And so the ping-pong of completely useless frames continues forever. Is this an error in the protocol, an error in the protocol's pseudocode, or am I wrong?

My guess is that A's frame_expected sequence number will not agree with B's second acknowledgment.

One-bit sliding window protocol
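
To make the scenario checkable, here is a loose trace simulation of my reading of the protocol. Note that it deliberately simplifies the book's protocol 4 (which assumes both sides always have packets to send), so the "empty" frames are my addition, which is exactly the point in question:

```python
# A loose trace simulation of the scenario above (NOT the book's exact
# protocol 4 pseudocode). Each station keeps a one-bit next-to-send
# counter and a one-bit frame_expected counter, and replies to every
# arriving frame, as the protocol prescribes.

def on_arrival(st, frame):
    seq, ack = frame
    if seq == st["exp"]:        # in-sequence frame (data or empty) accepted
        st["exp"] ^= 1
    if ack == st["next"]:       # our outstanding frame has been acknowledged
        st["next"] ^= 1
    # Protocol rule: every frame_arrival triggers sending a frame in reply.
    reply = (st["next"], st["exp"] ^ 1)
    print(f'{st["name"]}: got seq={seq} ack={ack} -> replies {reply} (empty)')
    return reply

A = {"name": "A", "next": 0, "exp": 0}
B = {"name": "B", "next": 0, "exp": 0}

frame = (0, 1)                  # A's only packet X, sent as frame (seq, ack)
print(f"A: sends {frame} carrying X")
for i in range(4):              # the ping-pong of empty frames never dies out
    frame = on_arrival(B if i % 2 == 0 else A, frame)
```

In this loose model the counters keep matching on both sides, so nothing ever rejects the exchange and the empty frames really do bounce forever.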

protocol buffers – Choice of serialization frameworks

I was reading about the cons of using Java serialization and the need to opt for a serialization framework instead. There are so many frameworks, like Avro, Parquet, Thrift, and Protobuf.

The question is which framework addresses which needs, and what parameters should all be taken into account when choosing a serialization framework.

I would like to work through a practical use case and compare/choose serialization frameworks according to its requirements.

Can anyone help me on this topic?
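
For what it's worth, here is the kind of harness I had in mind for comparing frameworks on two of the parameters (payload size and speed), using only standard library codecs as stand-ins:

```python
# A minimal harness for comparing serializers on payload size and
# encode/decode speed, using stdlib codecs as stand-ins; Avro/Protobuf/
# Thrift encoders can be plugged into the same dict once installed.
import json
import pickle
import timeit

record = {"id": 12345, "name": "sensor-7", "readings": [21.5, 21.7, 21.4] * 100}

frameworks = {
    "json":   (lambda o: json.dumps(o).encode(), lambda b: json.loads(b)),
    "pickle": (pickle.dumps, pickle.loads),
}

for name, (encode, decode) in frameworks.items():
    blob = encode(record)
    enc = timeit.timeit(lambda: encode(record), number=1000)
    dec = timeit.timeit(lambda: decode(blob), number=1000)
    print(f"{name:7s} size={len(blob):5d} B  encode={enc:.3f} s  decode={dec:.3f} s")
```

Other parameters, like schema evolution and cross-language support, obviously can't be benchmarked this way and need to be weighed separately.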

cryptography – Is there a decentralized 2-party consensus protocol?

Suppose there are 2 people in a conversation. Each can support a given position on an issue (the support is binary: agreement = 1, disagreement = 0). The two want to find like-minded individuals without revealing their (potentially incriminating) position to someone who doesn't share it. We can assume that neither wants to misrepresent themselves, so they will not lie about their position; but otherwise, each will try to find out the other's position without giving away their own.

In technical terms:
Two parties (call them A and B), each knowing one boolean value (v_A, v_B), want to compute the shared value v_A AND v_B without revealing the underlying values.
By the properties of AND, it is inevitable that if v_A is true, A can deduce v_B from the result; but otherwise (v_A is false), this is not possible (as desired).

In a centralized scenario, this could easily be solved by both parties providing their values to a trusted third party (TTP), which computes the result and returns it to both parties, but a TTP is not always available.

Is it possible to build a protocol that achieves this goal without a third party, under the constraints mentioned at the end of the first paragraph?

I know undergraduate math fairly well, with some basics of homomorphic encryption and signature algorithms, and I have read some articles on zero-knowledge proofs. Feel free to leave hints to further literature; I will be happy to work out the rest myself.

PS: I am aware that the practical cryptographic use of such a protocol is very limited, since neither party can be forced to provide a correct value (and always providing 1 reveals the other party's value), and trying to incriminate them by publishing the result, even if it were undeniable, would reveal your own support.
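
For context, one building block I have read about is 1-out-of-2 oblivious transfer (OT): B offers m_0 = 0 and m_1 = v_B, and A retrieves m_{v_A} without B learning v_A, which is exactly v_A AND v_B. Here is my toy sketch of the Even-Goldreich-Lempel RSA-based OT, with tiny insecure parameters, for illustration only:

```python
# Toy 2-party AND via 1-out-of-2 oblivious transfer (Even-Goldreich-
# Lempel style). Parameters are tiny and this is NOT secure; it only
# illustrates the flow: A learns m_{v_A} = v_A AND v_B, while B learns
# nothing about v_A.
import secrets

# --- B's side: toy RSA key (use real key sizes in practice) ---
p, q = 1009, 1013
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

v_A, v_B = 1, 1                 # the two private bits
m0, m1 = 0, v_B                 # OT messages: AND with 0 is 0, AND with 1 is v_B

# B publishes two random values, one per possible choice bit.
x0, x1 = secrets.randbelow(N), secrets.randbelow(N)

# --- A's side: blind the chosen x with a random k ---
k = secrets.randbelow(N)
xb = x1 if v_A else x0
v = (xb + pow(k, e, N)) % N     # sent to B; hides which x was chosen

# --- B's side: decrypt under both hypotheses, mask both messages ---
k0 = pow((v - x0) % N, d, N)    # equals k iff A chose bit 0
k1 = pow((v - x1) % N, d, N)    # equals k iff A chose bit 1
c0, c1 = (m0 + k0) % N, (m1 + k1) % N   # both sent to A

# --- A's side: only the chosen ciphertext unmasks correctly ---
result = ((c1 if v_A else c0) - k) % N
print(f"v_A AND v_B = {result}")   # A can now share the result with B
```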

protocol – Incentive for miners to continue mining once the reward is very low

it will be less and less rewarding for miners to continue adding hashing power to protect the network, and more and more tempting to try a 51% attack, if only to break confidence in the network.

This is a fairly weak assumption. There is no requirement that the Bitcoin hashrate keep growing; there is no security assumption around that. It is just an observed property that the hashrate is currently growing.

Bitcoin miners are assumed to work toward one goal: profit. If, in one way or another, derailing the very thing that gives you profit becomes attractive to you, then the incentive assumption underlying Bitcoin's existence is simply incorrect.

The incentive to play the game fairly and continue operating is quite high, but what will happen once the reward is very low, like 1 bitcoin or even less? Even if the price gets big, like >100,000 dollars, it will be less and less rewarding.

If the cost of operating the hashrate exceeds the income generated, miners do not mine. The whole system is left at a barely profitable equilibrium of mining equipment, whatever profit can be made by mining. If fee volumes ultimately do not cover the costs, there will be less mining.
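
To make that concrete, here is a back-of-the-envelope profitability check; every number in it is hypothetical and only illustrates the equilibrium argument:

```python
# Hypothetical miner profitability, illustrating the equilibrium
# argument: mining continues only while revenue covers operating cost.
# All numbers below are invented for illustration.
block_subsidy_btc = 1.0          # the "very low reward" from the question
fees_per_block_btc = 0.2         # hypothetical average fee volume
btc_price_usd = 100_000

my_hashrate = 1e15               # H/s, hypothetical farm
network_hashrate = 5e20          # H/s, hypothetical network total
blocks_per_day = 144

power_kw = 300                   # hypothetical farm power draw
electricity_usd_per_kwh = 0.05

revenue = (blocks_per_day * (block_subsidy_btc + fees_per_block_btc)
           * btc_price_usd * my_hashrate / network_hashrate)
cost = power_kw * 24 * electricity_usd_per_kwh

print(f"daily revenue ${revenue:,.2f} vs cost ${cost:,.2f}")
# If cost > revenue, this miner switches off; the network hashrate then
# falls until the remaining miners are back at a barely profitable balance.
```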

forever keep a sufficient number of machines ready to be turned on at any sign of an attempted 51% attack

You assume incorrectly. There are no large farms of idle hardware just lying around. Large installations require a significant amount of maintenance, large quantities of energy are not instantly available, and if there were large quantities of obsolete equipment ready for use, it would probably be so inefficient that it would not be worth running for any purpose.

so what could be the solution to protect the network?

A failure of incentives means a complete failure of Bitcoin as a whole; to pretend otherwise, or to pretend that there is some secret fix that has not already been deployed, is misleading.

schnorr signatures – Could Taproot create greater security risks or even hinder future protocol adjustments regarding quantum threats?

Let me start by addressing the misconceptions in the messages you quote.

DSA (and Schnorr) are inherently based on the discrete logarithm problem, which is vulnerable to (sufficiently powerful) quantum computers. As a result, there is no "post-quantum DSA". DSA also does not have Schnorr's linearity property; if "linear DSA" means anything, it would just be a strange way of referring to Schnorr (DSA is a modification of Schnorr that was intended to circumvent its patent). There are digital signature schemes that are (presumably) post-quantum secure, but they are not based on the discrete logarithm problem, and they are generally very large.

Another misconception is that Taproot relies on the linearity of Schnorr. This is not the case – Taproot could also have been built using ECDSA; it would just be much less useful. The linearity property is needed for easy key aggregation, so that a single key can represent the consent of multiple parties.
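
To illustrate what linearity buys, here is a toy Schnorr scheme over a small multiplicative group, where two partial signatures simply add up into one signature valid under the product of the two public keys. The parameters are tiny and insecure, and real schemes (e.g. MuSig) add rogue-key defenses that this sketch omits:

```python
# Toy demonstration of Schnorr linearity / key aggregation.
import hashlib
import secrets

q = 1439                     # subgroup order (prime)
p = 2 * q + 1                # 2879, also prime
g = 4                        # generates the order-q subgroup mod p

def H(R: int, X: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(f"{R}|{X}|".encode() + msg).digest(),
                          "big") % q

# Two signers with independent keys.
x1, x2 = secrets.randbelow(q - 1) + 1, secrets.randbelow(q - 1) + 1
X1, X2 = pow(g, x1, p), pow(g, x2, p)
X_agg = (X1 * X2) % p        # aggregate public key: just the product

msg = b"shared statement"
k1, k2 = secrets.randbelow(q - 1) + 1, secrets.randbelow(q - 1) + 1
R_agg = (pow(g, k1, p) * pow(g, k2, p)) % p   # aggregate nonce
c = H(R_agg, X_agg, msg)     # both signers use the same challenge

s1 = (k1 + c * x1) % q       # each party signs with only its own secret
s2 = (k2 + c * x2) % q
s = (s1 + s2) % q            # linearity: partial signatures add up

# One ordinary Schnorr verification against the single aggregate key:
print(pow(g, s, p) == (R_agg * pow(X_agg, c, p)) % p)   # True
```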

So, does starting to rely on Schnorr's linearity make migration to PQC signatures more difficult?

Linearity is just used as a tool to increase the privacy (and efficiency) of the Bitcoin scripting system without changing much. However, linear signatures are not the only way to achieve these abstract goals. A PQC replacement would not shoehorn Schnorr's properties into another signature scheme – it would just be built for private multisignatures (and more) in the first place. If no such scheme can be found, we would simply lose these privacy benefits (the efficiency gains would mostly not translate anyway), and we would be no worse off than if we had never adopted Schnorr/Taproot in the first place.

There are, however, obstacles to this that are unrelated to Schnorr/Taproot. Perhaps the most important is the way key derivation works today. PQC signature schemes are nothing like BIP32, and it is not trivial to keep much of the infrastructure that exists today around key generation (xpubs, derivation paths, PSBT, hardware wallets, …). I suspect this will be a much harder problem to solve than the script-level constructions that Taproot enables.

Ideally, how would all of this functionality be transferred to PQC systems?

Before answering that, let me point out that in many ways Schnorr/Taproot is just one step toward hiding certain things about scripts. It only brings benefits when outputs can be spent either by one party or cooperatively by several. In an ideal world, they would be replaced by a zero-knowledge proof that all the properties with which the receiver of a coin wanted to encumber it were satisfied when it was spent, without revealing anything else.

Once you look at the problem this way, it becomes clear that what we need is actually a zero-knowledge proof, not a signature. A signature is limited to a single party proving something to a single verifier who knows what he wants proven. This is not what we need: usually multiple parties are involved, and the verifiers (full nodes that apply the consensus rules) don't really care which policy was fulfilled – only that it matches the policy defined by the owner of the coin.

Today there are zero-knowledge proof systems that could do this (to a greater or lesser extent), although they involve performance/size trade-offs or security assumptions that may be difficult for the ecosystem to adopt right now. However, this area of science has come a long way in the past few years, and I think it will continue to do so.

Back to PQC: some of these zero-knowledge proof schemes can be made post-quantum secure. Like PQC signature schemes, they are generally large (even more so than signatures), but improvements are underway. For quantum computers, we are talking about events that will probably take place decades from now (or not at all), and a lot can happen in that amount of time.

recommended equipment – Best camera and protocol for a CNN project embedded in real time

It's a bit related to photography, so I hope it fits this forum.

I am looking to develop a standalone real-time outdoor CNN imaging application, and I am lost in the myriad of camera options and their communication protocols.
The target is an embedded Linux device (such as a BeagleBone plus an Edge TPU), and the main language is Python (but, if it is unavoidable, I can write a C++ driver).

I have done a number of projects with webcams and OpenCV. However, this project requires a more serious camera, with motorized zoom and autofocus,
at least 3 MP, and 30 fps.

  1. There are many Chinese IP camera modules with zoom functions,
    but they stream H.265, and I don't know how OpenCV could interact with that (see the sketch after this list);
    moreover, it is not clear how to control the zoom from Python on any of them.
  2. Then there are USB2 camera modules; they seem to be of lower but sufficient quality, but I haven't found one with a motorized zoom.
  3. Then there are the 'GigE' and 'USB3 Vision' cameras; they seem optimal but are prohibitively expensive ($500+).
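
On the H.265 concern in item 1, my current understanding (possibly wrong) is that OpenCV delegates decoding to FFmpeg/GStreamer, so an RTSP stream should open like any other capture source if the build supports HEVC; the URL below is a placeholder:

```python
# Minimal sketch of reading an H.265 RTSP stream with OpenCV, assuming
# the OpenCV build was compiled against an FFmpeg that supports HEVC.
# The URL is a placeholder; check the camera's documentation. Zoom
# control would go separately, typically via the camera's ONVIF/HTTP API.
import cv2

cap = cv2.VideoCapture("rtsp://user:pass@192.168.1.64:554/stream1",
                       cv2.CAP_FFMPEG)
if not cap.isOpened():
    raise RuntimeError("could not open stream; check URL and codec support")

while True:
    ok, frame = cap.read()           # decoded BGR frame, ready for the CNN
    if not ok:
        break
    cv2.imshow("preview", frame)     # replace with the inference pipeline
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```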

It's kind of an open question, but I haven't found much information online on this topic, so I hope to find some wisdom here.

So, I was wondering if anyone had any advice or recommendations on this.