conversion – Java: storing temperature values in a single byte

I have the following code to pack temperature values into a single byte and unpack them again:

  public static int packFromFloat(float t) {
    return (int) Math.round(((t - 16.0f) * 255.0f) / 16.0f);
  }

  public static float unpackToFloat(int a) {
    return (Math.round(((float) a * 16.0f) / 25.5f) / 10.0f) + 16.0f;
  }

  public static void main(String[] args) {
    for (int i = 150; i < 330; i++) {
      char a = (char) (packFromFloat(i / 10f) & 0xFF); // unsigned byte
      float b = unpackToFloat(a);
      System.out.println(i / 10f + " -> " + (int) a + " -> " + b + " -> " + (i / 10f == b));
    }
  }

The problem is with temperatures below 16.0 degrees Celsius and above 32.0 degrees Celsius, which fall outside the packed range. How can I solve this?
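One possible fix, sketched below, is to clamp the temperature into the supported range before packing, so out-of-range values saturate at 0 or 255 instead of overflowing the byte. The class and method names here are hypothetical, and saturation deliberately loses information outside 16.0–32.0 °C:

```java
public class TempPacker {
    static final float MIN = 16.0f;
    static final float MAX = 32.0f;

    // Clamp into [MIN, MAX] before scaling, so the result always fits in 0..255
    public static int pack(float t) {
        float clamped = Math.max(MIN, Math.min(MAX, t));
        return Math.round((clamped - MIN) * 255.0f / (MAX - MIN));
    }

    // Invert the scaling, rounding to one decimal place as in the original code
    public static float unpack(int a) {
        return Math.round(a * (MAX - MIN) * 10.0f / 255.0f) / 10.0f + MIN;
    }
}
```

With this sketch, pack(15.0f) and pack(33.0f) saturate to 0 and 255. If values outside 16.0–32.0 must round-trip exactly, the range constants would have to be widened instead, at the cost of coarser resolution per step.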

Average TTFB (Time To First Byte) for a shared reseller?

Hi all,

My host has recently moved all resellers to its new high performance cloud infrastructure. At least that's how it was announced to us.

Since then, some of my most serious clients have complained that their website's Time To First Byte (TTFB) was well over 10 seconds when testing their site with GTmetrix.

This ultimately harms their SEO because, as we know, Google penalizes slow sites. (Search it on Google)

So, when I checked on my side, they were absolutely right. The TTFB was atrocious! I immediately contacted my host to report it, and they quickly started troubleshooting.

Since I reported it, they have worked closely with the vendor to make configuration changes, migrate hypervisors, restart the cloud infrastructure, resynchronize disks, and so on.

All this work has improved the TTFB to about 3 seconds, but I STILL feel that is not good at all.

I've compared the TTFB to other hosts where I also have accounts, and their TTFB is less than 200 ms.

I've researched this, and everything I've read indicates that an optimal TTFB should be 200 ms or below, which is what I see on my other hosting providers. (Again, Google it.)

I sent follow-up emails to my host and have not heard back. It's been almost 3 weeks now and the problem persists.

Needless to say, my clients are not happy at all and are planning to leave. How can I stop them? I can't really blame them, because in my opinion they have a valid reason.

My question is: what is your average TTFB on your reseller host? I would like to get an idea of what others are experiencing and what the average TTFB should be on shared reseller servers.

Thank you

Memory Access – Byte Addressable vs. Word Addressable

I'm trying to understand the difference between byte addressing and word addressing.

A 4-way set-associative cache with a capacity of 16 KB is built using an 8-word block size. The word length is 32 bits. The physical address space is 4 GB.

Number of sets in the cache $= (16 \times 1024) / (4 \times 8 \times 4) = 2^7$

If word addressing is used:

Block offset $= 3$ bits

Since the PAS is $4$ GB, the total number of addresses $= 2^{32} / 2^2 = 2^{30}$

So total address bits $= 30$

Address structure:

Tag bits: $20$ bits
Set bits: $7$ bits
Block offset bits: $3$ bits

Suppose now that the CPU wants to access the 3rd byte of a particular word.

  1. The cache controller will use the $7$-bit set field to index into a set, then compare the upper $20$-bit tag field against all $4$ blocks in the set. If a match is found, a cache hit occurs, and the lower $3$-bit block offset selects which of the $8$ words to load into one of the general-purpose registers. The CPU then extracts the 3rd byte of the word and performs the operation.
  2. If the tags do not match, a cache miss occurs, a memory-read signal is sent and, because of spatial locality of reference, the block containing the word is transferred into the cache.

If the CPU is byte addressable:

Total address bits $= 32$

Address structure:
Tag bits: $20$ bits
Set bits: $7$ bits
Block offset bits: $5$ bits

If the CPU wants to access the 3rd byte of a word:

  1. Same as step 1 of word addressing, but the CPU can now directly address the 3rd byte of the word using the $2$-bit byte offset. However, I am confused about how that would happen. Since the CPU register is 1 word wide, just as with word addressing, one of the $8$ words of the block will be transferred into the register. But how would the byte-extraction step be any easier here? And why do we call it byte addressable if we still address a whole word?
  2. Same as step 2 of word addressing. The block of data will be transferred from memory into the cache on a miss.

In addition, this answer indicates that physical memory is always byte addressable. So what is the difference between the addressability of the memory and that of the CPU architecture?
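As an aside, the bit-field arithmetic worked out above can be cross-checked with a short sketch (the class name and helpers are mine, purely for illustration):

```java
public class CacheFields {
    // log2 of a power of two
    static int log2(int x) { return Integer.numberOfTrailingZeros(x); }

    // number of set-index bits: sets = capacity / (block size * ways)
    static int setBits(int cacheBytes, int blockBytes, int ways) {
        return log2(cacheBytes / (blockBytes * ways));
    }

    // tag bits are whatever remains of the address
    static int tagBits(int addressBits, int setBits, int offsetBits) {
        return addressBits - setBits - offsetBits;
    }

    public static void main(String[] args) {
        int blockBytes = 8 * 4;                       // 8 words * 4 bytes/word
        int sets = setBits(16 * 1024, blockBytes, 4); // 7 set bits
        System.out.println(sets + " set bits, "
            + tagBits(32, sets, log2(blockBytes)) + " tag bits (byte addressing), "
            + tagBits(30, sets, log2(8)) + " tag bits (word addressing)");
    }
}
```

Note that the tag ends up at $20$ bits either way: byte addressing adds 2 offset bits but also 2 address bits, so only the offset field grows.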

Android album art has stopped working, all artwork files are 0 bytes

On my Sony Xperia device running Android Pie, I can no longer load album art from the embedded tags in FLAC files. (It worked, but stopped at some point.) Has anyone encountered this? I have tried:

  • clearing the app data/cache of the Music app, the Media Storage app, and the External Storage app

  • unmounting and remounting the SD card

  • re-copying the files to the device

  • changing the "Album" tag to something different and copying the file to the device again

But each new music file generates only a 0-byte file in Android > data > … > albumthumbs. (From other Android devices, I know these files are not supposed to be empty.) Any help is welcome to avoid having to perform a factory reset.

difference in efficiency between java int and byte

Is there really a difference between the int data type and the byte type? I know int is easier to "understand" in terms of reading and writing code, but a byte theoretically occupies less storage space. I say "in theory" because somebody once told me that the Java virtual machine stores values in 4-byte slots, so that handling a byte is identical to handling an int, with the remaining 3 bytes wasted. And in terms of running time, is there really a difference?
As for code structure and good practice: I know that using byte just to save 6 or 9 bytes makes no sense, and in that case the best thing is to use an int, since the saving is insignificant and not worth "sacrificing the readability of the code". But if you manage around 20,000 values, is there a noticeable difference, or is it better to keep using int for convenience? At what amount of data does it really become worth using byte instead of int?
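One point worth separating out: the 4-byte-slot behavior described above applies to single fields and local variables, not to arrays — a byte[] really is packed at one byte per element. A rough sketch of the sizes involved (helper names are mine, and the figures ignore object headers and alignment):

```java
public class ByteVsInt {
    // Approximate payload sizes, ignoring array object headers
    static long intArrayBytes(int n)  { return 4L * n; }
    static long byteArrayBytes(int n) { return n; }

    // Arithmetic on byte operands is widened to int, so storing the
    // result back into a byte needs an explicit cast
    static byte addBytes(byte a, byte b) {
        return (byte) (a + b);
    }
}
```

For ~20,000 values that is roughly 80 KB for int[] versus 20 KB for byte[], which can matter for cache behavior, while the per-operation cost of the widening and the cast is usually negligible.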

Python – How to Format a Byte Date (Python)

Hello, I have the following doubt: I would like to format the date as, for example, 24/05/2019, but what happens is this:

print(date)
Date: b'20290510113138Z'

I have the following code

cert = crypto.load_certificate(crypto.FILETYPE_PEM, open(cert_file).read())
date = cert.get_notAfter()
print("Date:", date)

I found a snippet that solves the problem, but I do not know how to integrate it into my code:

datetime.strptime(cert.get_notAfter().decode('ascii'), '%Y%m%d%H%M%SZ')

performance – For SEO, should I improve Time To First Byte or the PageSpeed Insights score?

Only two page-load speed measurements really matter:

  • Time to deliver the HTML page (without resources such as images, CSS, and JS)
  • Time for the first screenful of content to be loaded and usable

The HTML delivery time is important because it is the factor that Googlebot sees most directly. It controls how quickly Googlebot can crawl your site. The algorithmic ranking penalties that Google applies are almost all based on this metric.

The time it takes for the page to be usable is important for users. Users turn away from a site that is not quickly usable. This has indirect consequences for SEO, because Google notices when users are not satisfied with a site.

So, how fast should your site be?

  • Google will actively penalize sites where the HTML page is not delivered within 7 seconds.
  • Users start to turn away from an unusable site within 3 seconds.

So focus on the users. They are much more demanding than Google. You have three seconds. It breaks down to:

  • 1 second to get your HTML page delivered.
  • 1 second to download critical CSS, JS, and image resources.
  • 1 second to allow the browser to render the page.

Since TTFB is a component of HTML page delivery, you must optimize it to use only part of the second allocated for HTML delivery. Get it down to 200 to 500 ms.

All assets that are not needed for the page to start working should be lazy loaded. Most JavaScript should be loaded asynchronously. Below-the-fold images should be lazy loaded.

With this in mind, the PageSpeed Insights score can be largely ignored. I do not think Google uses this score directly in rankings. The tool and the score can still be useful: they can tell you what optimizations might be available to you and help you prioritize them. However, it is a mistake to pay attention only to the score. Instead, make the site fast for users as the main goal.

Do not forget that items beyond your control will affect your PageSpeed Insights score. I have a site that completely loads its pages in 1.2 seconds and gets a score of 100. However, when I enable AdSense on the page, PageSpeed Insights reports that the full load takes 10 seconds and the score drops to 63. This is despite the fact that the base page is fully usable after 1.2 seconds and the ads continue loading afterwards.

Why has Scapy added the byte 'c2' in the Dot11 information element?

I just followed the steps of Forging WiFi Beacons, but my output is strange. Scapy added a byte 'c2' between '0f' and 'ac'. Why did this happen? How can I fix it?

Scapy output image

clojure – Using deftype to create a wrapper class for byte arrays

First, your calls to .ba cannot be resolved for other, which forces the use of reflection. If you run lein check, you will see:

Reflection warning, thread_test.clj:22:22 - reference to field ba on java.lang.Object cannot be resolved.

This has the potential to slow down the method, although it only happens once per call, so the effect would not be huge.

Use explicit type hints to make sure the compiler knows which types you are working with, as you did below:

(let [m (alength ^bytes (.ba ^Bytes this))
      n (alength ^bytes (.ba ^Bytes other))

I tried type-hinting the parameters instead and got an error I had never seen before. I think it was looking for a compareTo with Object parameters, and declaring the parameters as Bytes was throwing it off.

Other than that, I do not see anything else performance-related. I'll just point out, in case you do not know, that this's ba is already in scope inside the deftype. You can use it directly, which removes a little noise, although the code becomes less symmetrical:

(deftype Bytes [^bytes ba]
  Comparable
  (compareTo
    [_ other]
    (let [m (alength ba) ; Here
          n (alength ^bytes (.ba ^Bytes other))
          l (min m n)]
      (loop [i 0]
        (if (< i l)
          (let [a (aget ba i) ; And here
                b (aget ^bytes (.ba ^Bytes other) i)
                d (compare a b)]
            (if (zero? d)
              (recur (inc i))
              d))
          (compare m n))))))

design – Functional approaches for serializing objects into variable-length byte array output

I have a lot of record types derived from a binary format specification. So far, I've written a computation expression builder that lets me easily read structures from files:

type Data = { Value: ... } // Record data

let readData: ReadOnlyMemory -> Result = // Takes a ReadOnlyMemory and returns either a record or a wrapped exception
    parser {
        let decode algorithm bytes =
            ... // code to transform the bytes
        let! algorithm = readUInt32LE 0 // The algorithm value, from the first 4 bytes in little-endian order
        let! length = readUInt32LE 4 // Length of the bytes to read for the value
        if length > 0 then
            let! value = readBytes 8 248 >=> decode algorithm // The actual data described by the bytes
            return { Value = value }
    }

The nice thing about this approach is that I can easily convert the format-specification tables stored in a worksheet into parsers expressed as F# computation expressions for each defined record type, together with additional code for the validation logic (as above). Much of the mess of match and conditional statements disappears thanks to computation expressions, and I get imperative-style code with the brevity of F# syntax. (Notice that my if statement has no matching else in the code above.)

However, I do not know the best way to do the reverse: take records and serialize them into bytes. As in the example above, the byte representation can vary in length. There are also other considerations that a writer must be aware of:

  • Variable length: the byte representation is not necessarily fixed length, although many are.
  • Context: the byte representation of certain types changes depending on where they are written, the type of the parent pointing at them, and sometimes even bytes written earlier. (I have a type where the encoder must process all the bytes, then go back to the first byte position and write the algorithm identifier, so the resulting byte sequences are not always written sequentially.)
  • Order: some records have a concept of pointers to parent, child, or sibling records, so the order of writing also matters.
  • Size: the resulting file sizes range from a megabyte to hundreds of gigabytes.
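The non-sequential case in the Context bullet (write the body first, then go back and fill in a header) can be sketched, for illustration only, with a Java ByteBuffer; the layout here, a 4-byte little-endian length followed by the body, is a made-up example, not the actual format:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class BackPatch {
    // Reserve room for a 4-byte length header, write the body,
    // then patch the header once the final length is known.
    public static byte[] encode(byte[] body) {
        ByteBuffer buf = ByteBuffer.allocate(4 + body.length)
                                   .order(ByteOrder.LITTLE_ENDIAN);
        buf.position(4);             // skip the header for now
        buf.put(body);               // sequential body write
        buf.putInt(0, body.length);  // absolute write: back-patch the header
        return buf.array();
    }
}
```

The same absolute-position write generalizes to the algorithm-identifier case; for files in the hundreds-of-gigabytes range, an in-memory buffer obviously does not scale, and a seekable file channel would be needed instead.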

I gave it some quick thought and came up with the following:

  • A computation expression builder that caches all write operations and returns a newly initialized byte/memory array once the length of the final byte representation is known:

    let encode algorithm bytes = // Defined outside the computation expression because the expression is ...
    let serialize data context =
        serializer {
            let algorithm = if context ...
                            then ...
                            else ...
            do! writeUInt32LE 0 algorithm
            let length = if algorithm ...
                         then ...
                         else ...
            do! writeUInt32LE 4 length
            do! writeBytesTo 8 <=< encode algorithm <| data.Value
            return Array.zeroCreate <| sizeof + sizeof + length
        }
  • An optimized version of the above for serializations with a known fixed size or a small upper bound.

I've implemented the above with somewhat workable results, but on reflection the resulting computation expressions are not very intuitive; the return statement at the very end creates the buffer that the preceding do! statements write to. And the builder type for the computation expression also does a lot of extra work to make this function.

Something tells me that I'm going about this the wrong way. If I want code with a high signal-to-noise ratio without significantly impacting clarity or performance, what is a better approach?