Difference in efficiency between Java int and byte

Is there really a difference between the int data type and the byte type? I know int is easier to "understand" when reading and writing code, but a byte theoretically occupies less storage space. I say "in theory" because somebody once said that the Java virtual machine stores values in 4-byte slots, so that handling a byte is identical to handling an int, with the remaining 3 bytes wasted. And in terms of running time, is there really a difference?
As for good practice in structuring a program: I know that doing this to save 6 or 9 bytes makes no sense, and in that case it is best to use an int, since the saving is insignificant and not worth "sacrificing the readability of the code". But if you are handling around 20,000 values, which is a considerable amount of data, is there a noticeable difference? Is it worth using byte there, or is it better to keep using int for convenience?
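Not part of the original question, but the 20,000-element scenario can be made concrete. The sketch below uses Python's array module purely as an illustration of packed per-element storage; in Java the analogous comparison is byte[] versus int[], and as far as I know a Java byte[] really is packed at one byte per element, while the 4-byte-slot concern applies to individual fields and locals:

```python
from array import array

# Typed arrays make the per-element storage difference concrete:
# 'b' stores one byte per element, 'i' stores a C int (typically 4 bytes).
as_bytes = array('b', [0] * 20000)
as_ints = array('i', [0] * 20000)

print(as_bytes.itemsize, as_ints.itemsize)  # typically 1 and 4
print(len(as_bytes) * as_bytes.itemsize)    # 20000 bytes of payload
print(len(as_ints) * as_ints.itemsize)      # typically 80000 bytes
```

For 20,000 elements that is a real (if modest) memory saving; whether it matters for running time depends on cache behavior and would need to be benchmarked.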

Python – How to Format a Byte Date (Python)

Hello, I have the following question. I would like the date formatted like this, for example 24/05/2019, but what happens is this:

print("Date:", date)
Date: b'20290510113138Z'

I have the following code

cert = crypto.load_certificate(crypto.FILETYPE_PEM, open(cert_file).read())
date = cert.get_notAfter()
print("Date:", date)

I found code that solves the problem, but I do not know how to apply its suggestion in my own code:

datetime.strptime(cert.get_notAfter().decode('ascii'), '%Y%m%d%H%M%SZ')
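Putting the pieces together, here is one way the suggestion could be applied. The helper function name is my own; the literal timestamp stands in for the value `cert.get_notAfter()` returns, which pyOpenSSL gives back as ASN.1 time bytes like `b'20290510113138Z'`:

```python
from datetime import datetime

def format_not_after(raw: bytes) -> str:
    """Parse an ASN.1 timestamp such as b'20290510113138Z' (the shape
    returned by cert.get_notAfter()) and reformat it as DD/MM/YYYY."""
    parsed = datetime.strptime(raw.decode('ascii'), '%Y%m%d%H%M%SZ')
    return parsed.strftime('%d/%m/%Y')

# A literal value standing in for cert.get_notAfter():
print(format_not_after(b'20290510113138Z'))  # 10/05/2029
```

In the question's code, `print("Date:", format_not_after(cert.get_notAfter()))` would then print the date in the desired format.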

performance – For SEO, should I improve time to first byte or the PageSpeed Insights score?

Only two page-load speed measurements really matter:

  • Time to fetch the HTML page (without resources such as images, CSS, and JS)
  • Time until the first screenful of content is loaded and usable

The HTML delivery time is important because it is the factor Googlebot sees most directly. It controls how quickly Googlebot can crawl your site. Google's algorithmic ranking penalties are almost all based on this metric.

The time it takes for the page to become usable is what matters for users. Users abandon a site that is not usable quickly. This has indirect consequences for SEO, because Google notices when users are not satisfied with a site.

So, how fast should your site be?

  • Google will actively penalize sites where the HTML page is not delivered within 7 seconds.
  • Users start to turn away from an unusable site within 3 seconds.

So focus on the users. They are much more demanding than Google. You have three seconds. It breaks down to:

  • 1 second to get your HTML page delivered.
  • 1 second to download the critical CSS, JS, and image resources.
  • 1 second to allow the browser to render the page.

Since TTFB is a component of HTML page delivery, you must optimize it so that it uses only part of the second allocated to HTML delivery. Aim for 200 to 500 ms.
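As a quick way to check where a page stands against that 200–500 ms target, TTFB can be approximated from a script. This Python sketch is my own illustration (browser dev tools or curl give more precise numbers) and uses only the standard library:

```python
import time
import urllib.request

def time_to_first_byte(url: str) -> float:
    """Rough TTFB estimate: seconds from issuing the request until the
    first byte of the response body is available. Includes DNS lookup
    and connection setup, so treat it as an upper bound."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read(1)  # block until the first body byte arrives
    return time.perf_counter() - start
```

Run it a few times and take the median, since individual measurements vary with network conditions.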

All assets that are not needed for the page to start working should be lazy-loaded. Most JavaScript should be loaded asynchronously. Images below the fold should have their loading deferred.

With this in mind, the PageSpeed Insights score can be largely ignored. I do not think Google uses this score directly in the rankings. The tool and the score can still be useful: they can tell you what optimizations are available to you and help you prioritize them. However, it is a mistake to focus only on the score. Instead, make the site fast for users the main goal.

Do not forget that items beyond your control will affect your PageSpeed Insights score. I have a site that fully loads its pages in 1.2 seconds and gets a score of 100. However, when I enable AdSense on the page, PageSpeed Insights reports that the full load takes 10 seconds and the score drops to 63, even though the base page is fully usable after 1.2 seconds and the ads continue loading afterwards.

Why has Scapy added the byte 'c2' in the Dot11 information element?

I just followed the steps for forging a WiFi beacon, but my output is strange. Scapy added a byte "c2" between the byte "0f" and the byte "ac". Why did this happen, and how can I fix it?

[Scapy output screenshot]
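One plausible cause worth checking (an assumption on my part, since the screenshot is not reproduced here): in Python 3, building the element's info payload from a str instead of a bytes literal means it gets UTF-8 encoded, and any code point of 0x80 or above picks up an extra lead byte such as c2:

```python
# A str literal holds code points; encoding it to UTF-8 inserts a
# lead byte (here 0xc2) before any code point >= 0x80.
as_text = "\x00\x0f\xac\x04"    # str: looks like the raw suite bytes
as_bytes = b"\x00\x0f\xac\x04"  # bytes: the raw octets you actually want

print(as_text.encode("utf-8").hex())  # 000fc2ac04  <- spurious c2
print(as_bytes.hex())                 # 000fac04
```

If that matches what you see, using bytes literals (b"...") for the information element payload avoids the extra byte.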

clojure – Use style to create a wrapper class for byte arrays

First, your calls to .ba on other cannot be resolved, which forces the use of reflection. If you run lein check, you will see:

Reflection warning, thread_test.clj:22:22 - reference to field ba on java.lang.Object can't be resolved.

This has the potential to slow down the method, although this only happens once per call, so the effect would not be huge.

Use explicit type hints to make sure the compiler knows what types you are working with, like so:

(let [m (alength ^bytes (.ba ^Bytes this))
      n (alength ^bytes (.ba ^Bytes other))

I tried type-hinting the parameters instead and got an error I had never seen before. I think it was looking for a compareTo with Object parameters, and declaring the parameters as bytes threw it off.

Other than that, I do not see anything else performance-related. I will just point out, in case you did not know, that this's ba is actually in scope. You can use it directly, which removes a little noise, although the code becomes less symmetrical:

(deftype Bytes [^bytes ba]
  Comparable
  (compareTo
    [_ other]
    (let [m (alength ba) ; Here
          n (alength ^bytes (.ba ^Bytes other))
          l (min m n)]
      (loop [i 0]
        (if (< i l)
          (let [a (aget ba i) ; And here
                b (aget ^bytes (.ba ^Bytes other) i)
                d (compare a b)]
            (if (zero? d)
              (recur (inc i))
              d))
          (compare m n))))))
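For reference, the ordering this compareTo implements (element-wise comparison up to the shorter length, then comparison of lengths) can be sketched outside Clojure as well. This Python version is my own illustration, not part of the review, and it ignores Java's signed bytes since Python treats bytes as unsigned 0–255:

```python
def compare_bytes(a: bytes, b: bytes) -> int:
    """Element-wise comparison up to the shorter length, then compare
    lengths -- the same ordering the Clojure compareTo implements."""
    for x, y in zip(a, b):
        if x != y:
            return (x > y) - (x < y)  # -1, 0, or 1, like compare
    return (len(a) > len(b)) - (len(a) < len(b))

print(compare_bytes(b"abc", b"abd"))  # -1: differs at the last element
print(compare_bytes(b"ab", b"abc"))   # -1: shorter, equal prefix
print(compare_bytes(b"abc", b"abc"))  # 0: equal
```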

design – Functional approaches for serializing objects into a variable length byte array output

I have a lot of record types derived from a binary format specification. So far, I have written a computation expression builder that lets me easily read structures from files:

type Data = { Value: ... } // Record data

let readData: ReadOnlyMemory<byte> -> Result<Data, exn> = // Takes a ReadOnlyMemory<byte> and returns either a record or a wrapped exception
    parser {
        let decode algorithm bytes =
            ... // code to transform the bytes
        let! algorithm = readUInt32LE 0 // The algorithm value, from the first 4 bytes in little-endian order
        let! length = readUInt32LE 4 // Length in bytes to read for the value
        if length > 0 then
            let! value = readBytes 8 248 >=> decode algorithm // The actual data described by the bytes
            return { Value = value }
    }

The nice thing about this approach is that I can easily convert the format-specification tables stored in a worksheet into parsers as F# computation expressions for each defined record type, plus extra code for validation logic (as above). Much of the mess of matches and conditional statements disappears thanks to the computation expressions, and I get imperative-style code with the brevity of F# syntax. (Notice my if statement has no matching else branch in the code above.)

However, I do not know the best way to do the reverse: take records and serialize them into bytes. As in the example above, the byte representation can vary in length. There are also other considerations a writer must handle:

  • Variable length: the byte representation is not necessarily fixed-length, although many are.
  • Context: the byte representation of some types changes depending on where they are written, the parent type that points to them, and sometimes even the bytes that came before. (I have a type where the encoder must process all the bytes, then go back to the first byte position and write the algorithm identifier, so the resulting byte sequences are not always written sequentially.)
  • Order: some records have a concept of pointers to parents, children, or siblings, so write order also matters.
  • Size: the resulting file sizes range from a megabyte to hundreds of gigabytes.

I thought about it quickly and proposed the following:

  • A computation expression builder that caches all write operations and returns a newly initialized byte/memory array once the length of the final byte representation is known:

    let encode algorithm bytes = ... // This is defined outside the computation expression because the expression is
    let serialize data context =
        serializer {
            let algorithm = if context ...
                            then ...
                            else ...
            do! writeUInt32LE 0 algorithm
            let length = if algorithm ...
                         then ...
                         else ...
            do! writeUInt32LE 4 length
            do! writeBytesTo 8 <=< encode algorithm <| data.Value
            return Array.zeroCreate <| sizeof<uint32> + sizeof<uint32> + length
        }
  • An optimized version of the above for serializations with a known fixed size or small upper limit.
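Stripped of the computation-expression machinery, the "cache writes, allocate once the length is known" idea above can be sketched like this. Python is used for brevity; the offsets and little-endian layout mirror the example, and everything else (the placeholder algorithm id, the dict-shaped record) is illustrative:

```python
def serialize(record: dict) -> bytes:
    """Two-pass sketch: record (offset, bytes) writes first, allocate the
    buffer only once the total length is known, then replay the writes."""
    ops = []  # cached write operations: (offset, data)

    def write(offset: int, data: bytes) -> None:
        ops.append((offset, data))

    payload = record["value"]                     # stand-in for data.Value
    write(0, (1).to_bytes(4, "little"))           # algorithm id (placeholder)
    write(4, len(payload).to_bytes(4, "little"))  # length field
    write(8, payload)                             # the encoded value

    buf = bytearray(4 + 4 + len(payload))  # the "return Array.zeroCreate" step
    for offset, data in ops:
        buf[offset:offset + len(data)] = data
    return bytes(buf)

print(serialize({"value": b"abcd"}).hex())  # 010000000400000061626364
```

The same inversion the question complains about shows up here too: the buffer only exists after all the writes have been recorded, which is what makes the flow unintuitive.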

I have implemented the above with some working results, but on reflection the resulting computation expressions are not very intuitive; the return statement at the very end creates the buffer that the earlier do! statements write into. And the builder type for the computation expression also does a lot of extra work to make this function.

Something tells me that I am going about this wrong. If I want code with a high signal-to-noise ratio without significantly hurting clarity or performance, what would be a better way?

Slow loading web server, even HTML pages (10 seconds to first byte)

Hello, my Apache 2.x web server with suPHP has started loading all websites slowly (a few days ago it was fine, and I have made no configuration changes). For example, even a .html page takes about 10 seconds to load, and when the page contains an image I wait another 10 seconds for it to appear. So the problem is the time to first byte (TTFB) of my Apache web server.
When I restart it, it shows …


linear algebra – Transforming a byte into another value by XOR with a subset of a small set of fixed values

If I have a collection of bits, an octet for example, of arbitrary value, I can transform it into any other value by XORing a subset of eight fixed values (in this case), each affecting a single bit: 0x80, 0x40, … 0x01. Every possible value is reachable by some combination of these operations from any starting point.

But clearly there are other sets of eight values that work. For example, 0b1xxxxxxx, 0b01xxxxxx, 0b001xxxxx, 0b0001xxxx, 0b00001xxx, 0b000001xx, 0b0000001x, 0b00000001 clearly satisfy these criteria and constitute a relatively large family of such sets (because the x values can differ). (Proof: use the first value to fix the most significant bit, then the second to fix the second most significant bit, and so on.)

However, some sets clearly do not have this property. For example, for three bits, 0b101, 0b110, and 0b011 clearly do not, because each operation preserves parity.

I am interested in various aspects of the above property from a practical point of view, for a numerical computation model. I am not a theoretical computer scientist, but I studied these things long ago, so I am vaguely aware that what I need to look at may involve groups, Galois fields, LFSRs, etc.

Could someone steer me in the right direction, so that my learning is more targeted than randomly rummaging through textbooks and Wikipedia? Pointers to the names of the relevant fields of study, the major theorems/problems/categories and their names, useful techniques, etc. would be appreciated.
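To make the property concrete, here is a small self-contained sketch (my own illustration, not from the question) that decides it directly: Gaussian elimination over GF(2) tells you whether XOR combinations of a set of masks span the whole space, which is exactly what the two eight-value examples above satisfy and the parity-preserving triple does not:

```python
def spans_all_values(masks, width=8):
    """Return True if XOR combinations of `masks` can turn any width-bit
    value into any other, i.e. the masks span GF(2)^width."""
    basis = [0] * width  # basis[i]: a reduced mask whose highest set bit is i
    rank = 0
    for m in masks:
        for bit in range(width - 1, -1, -1):
            if not (m >> bit) & 1:
                continue
            if basis[bit] == 0:
                basis[bit] = m  # new independent direction
                rank += 1
                break
            m ^= basis[bit]  # eliminate this leading bit and keep reducing
    return rank == width

# The single-bit set spans; the parity-preserving triple does not.
print(spans_all_values([0x80, 0x40, 0x20, 0x10, 0x08, 0x04, 0x02, 0x01]))  # True
print(spans_all_values([0b101, 0b110, 0b011], width=3))                    # False
```

The keyword to search for here is "linear independence over GF(2)" (equivalently, the rank of the masks viewed as vectors of bits).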

image processing – How to create a 32-bit red texture byte buffer

I solved the problem by doing the following:

CImage m_cImage;
// create a test image
m_cImage.Create(w, -h, 8 * 4); // 8 bpp * 4 channels
auto hdc = m_cImage.GetDC();
Gdiplus::Graphics graphics(hdc);

// Create a SolidBrush object.
Gdiplus::SolidBrush redBrush(Gdiplus::Color::Red);

// Fill the rectangle.
Gdiplus::Status status = graphics.FillRectangle(&redBrush, 0, 0, w, h);
TRY_CONDITION(status == Gdiplus::Status::Ok);
// Then save the m_cImage.GetBits() to a bmp file with Gdiplus::Bitmap
// and my expected texture is produced

Create a 32-bit red texture byte buffer

I asked the question on Stack Overflow. I hope someone can help me understand it:

Question asked here