googlebot – Mobile usability issues reported by Google Search Console

Google Search Console reports that some pages have Mobile Usability issues:

  • Clickable elements too close together
  • Content wider than screen
  • Text too small to read

Some details:

  • this is not caused by restrictions in robots.txt
  • some of the failing pages have fewer than 8 resources, totalling about 550 KB
  • the failing pages are a small, seemingly random subset of the total
  • a “LIVE TEST” of the same URL may fail intermittently
  • there are no network problems (packet loss, response time, DNS) with the server where the site is deployed
  • the issue first appeared on April 14th
  • most of the failing pages score 90-99% for mobile performance in PageSpeed Insights and Lighthouse

Lighthouse embedded in Chrome and PageSpeed Insights both find no problem in mobile mode on the same pages that are flagged in Google Search Console.

But when I run a “LIVE TEST” on the failing URLs in Google Search Console, I sometimes get the same failed result with Mobile Usability issues, where a CSS file was not loaded with the reason “Other error”. I think this is the cause of the issue, but I don't understand what I can do to fix it, especially if it is due to some politeness limitation on Google's side (crawl budget).

tls – Compression and Encryption against security issues

I'm having a hard time working out whether the following setup is vulnerable to CRIME/BREACH-style attacks (which target HTTPS).

I am running a WireGuard VPN that tunnels the VXLAN protocol, using ChaCha20-Poly1305 encryption.
I would like to add CPU-cheap compression (LZ4) to the VXLAN frames, much like IPComp (RFC 3173).

Would adding LZ4 to my VXLAN frames make the encrypted VPN tunnel vulnerable to a potential attacker?

Side question: Since CRIME and BREACH target HTTPS specifically, are there any more generic versions of those attacks?
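
To make the concern concrete, here is a minimal sketch of the length side channel I am worried about, using the LZ4 block API (illustration only; the "secret" and "guess" strings are invented, and nothing here is specific to WireGuard or VXLAN):

// Illustration only: compressing attacker-influenced data next to a secret
// makes the compressed (and therefore encrypted) length depend on whether
// the guess matches the secret. Requires liblz4 (link with -llz4).
#include <lz4.h>
#include <cstdio>
#include <string>

static int compressed_size(const std::string &plain)
{
    const int bound = LZ4_compressBound(static_cast<int>(plain.size()));
    std::string dst(bound, '\0');
    return LZ4_compress_default(plain.data(), &dst[0],
                                static_cast<int>(plain.size()), bound);
}

int main()
{
    // Hypothetical secret travelling inside the tunnel.
    const std::string secret = "session_cookie=SECRETVALUE1234567890";
    // Attacker-influenced data compressed in the same frame (same length each time).
    const std::string right_guess = "session_cookie=SECRETVALUE1234567890";
    const std::string wrong_guess = "session_cookie=XXXXXXXXXXXXXXXXXXXXX";

    // A matching guess compresses better, so the frame gets shorter; that
    // length difference survives encryption.
    std::printf("matching guess:     %d bytes\n", compressed_size(secret + right_guess));
    std::printf("non-matching guess: %d bytes\n", compressed_size(secret + wrong_guess));
    return 0;
}

As far as I understand, this compressed-length difference is the core of what CRIME and BREACH exploit, just applied to HTTP.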

magento2.4 – CPU issues after upgrading from Magento 2.2.4 to Magento 2.4

We recently upgraded our site from Magento 2.2.4 to Magento 2.4.

We did the upgrade on a copy of our site on a test server and everything was fine.

When we upgraded our live server, page load times increased dramatically whenever 4+ people were on the site at the same time. This also crashed Elasticsearch, so we moved it to its own VPS and ES works fine now. Before the upgrade, 15-20+ users online at the same time wasn't uncommon and the server handled it fine.

With 2.2.4, we had a VPS with 6 GB RAM and 4 CPUs. Our hosting provider suggested we increase this when we ran into issues after the upgrade. We're now at 8 CPUs and 12 GB RAM, and although that improved performance, load times from the server are still very long.

We now have Varnish running on a separate VPS, and while that has sped up load times, it's still not good enough. Varnish has been running for the last 24 hours and we've been getting 503 and 504 errors; our Magento developer told me these are due to Varnish waiting too long for our Magento server to respond.

Our hosting company is now telling us we need to get a dedicated server. Is this necessary? Our Magento developer says that if there are no issues with the server, our VPS should be fine, while our hosting company insists the server is fine.

We're unsure what to do, as we don't have much confidence in our hosting company: they have just been telling us to upgrade our package without really investigating why we're having these issues.

network – Macbook 2007 (A1181) Wi-Fi issues in Windows

I have:

  • MacBook A1181
  • Windows 8.1 in Boot Camp with the latest patches
  • All Boot Camp drivers installed
  • Keenetic Viva router with dual-band 2.4/5 GHz Wi-Fi, located about 1 m from the MacBook

The issue is that download/upload speed over Wi-Fi is very low in every app (~60 kbps) and the connection drops frequently. The Ethernet connection, on the other hand, works great.

The same MacBook works fine over Wi-Fi when I boot into macOS, but unfortunately I need Windows.

Other devices, like an M1 MacBook Pro (late 2020), iPhones, and lots of Android phones (even 2.4 GHz-only ones), work fine on this router, so it seems to be a driver issue.

Any ideas on how to fix it?

long exposure – Issues with dark frame subtraction: Dark frames adding “noise” and changing image color/tint

While editing some landscape shots with stars, I tried to use dark frames to reduce the noise.
More precisely, my approach was to take a series of shots, then first subtract dark frames from each shot, second average the series for the foreground to further reduce noise, and third use an astro stacking tool (Sequator) to stack the sky.

Instead of reducing noise, the dark frame subtraction:

  1. increased the noise, or rather added some dark/monochrome noise, and
  2. changed the white balance / tinted the image (see the crops below).

I do not understand why this is happening or what I am doing wrong.

Procedure and troubleshooting already attempted:

  • All photos were shot in succession with the same settings (15 s, ISO 6400, in-camera dark frame subtraction disabled).

  • All photos were shot with the same white balance.

  • While shooting the dark frames, both the lens cap and the viewfinder cover were applied.

  • Photos were imported from my Pentax K-1 II, converted to DNG in LR, and exported to PS without any editing/import presets applied.

  • In PS, I placed the dark frame layer(s) above my picture and used the Subtract blending mode.

  • I followed basic instructions found online and in various videos on dark frame subtraction in Photoshop. Note that basically all of them cover dark frame subtraction with a single frame (or use tools other than Photoshop). I have tried using both one and three dark frames; the results are similar, albeit more pronounced with three.

  • I also used the free tool Sequator to subtract the dark frames instead (and to align the stars). Adding the dark frames there made absolutely no difference.

  • [Image: an edit/composite done with the frames I tried to subtract dark frames from]

  • [Image: a crop of the first picture, with (3) dark frames subtracted]

  • [Image: a crop of the second picture, without dark frames subtracted]
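
For reference, the arithmetic I expected dark frame subtraction to perform is plain master-dark subtraction: average the dark frames, then subtract that average from each light frame in linear space, before any white balance or tone adjustments. A minimal sketch of that (assuming OpenCV and 16-bit, 3-channel TIFF exports of the unedited frames; the file names are placeholders):

// Sketch only: builds a master dark from several dark frames and subtracts it
// from one light frame. Assumes 16-bit, 3-channel TIFF inputs; file names are
// placeholders.
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    const std::vector<std::string> dark_files = {"dark1.tif", "dark2.tif", "dark3.tif"};

    // Average the dark frames in floating point to build the master dark.
    cv::Mat master_dark;
    for (const std::string &f : dark_files) {
        cv::Mat d = cv::imread(f, cv::IMREAD_UNCHANGED);
        d.convertTo(d, CV_32F);
        if (master_dark.empty())
            master_dark = cv::Mat::zeros(d.size(), d.type());
        master_dark += d;
    }
    master_dark = master_dark / static_cast<double>(dark_files.size());

    // Subtract the master dark from the light frame and clamp negatives to zero.
    cv::Mat light = cv::imread("light1.tif", cv::IMREAD_UNCHANGED);
    light.convertTo(light, CV_32F);
    cv::Mat corrected = light - master_dark;
    corrected = cv::max(corrected, 0.0);

    corrected.convertTo(corrected, CV_16U);
    cv::imwrite("light1_darksub.tif", corrected);
    return 0;
}

This is the result I was hoping the Photoshop layer stack (or Sequator) would approximate.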

Issues with Apache HTTPD 2.4 LocationMatch containing a ?

I am having trouble using a LocationMatch for a specific URL that contains a ?

My current LocationMatch is
<LocationMatch "^/SOME/FOLDER/STRUCTURE/TEST/?cmd=logout">

The actual URL contains the ?, but I am having trouble getting this specific LocationMatch to work.

The error that I get is
AH01630: client denied by server configuration: /etc/httpd/htdocs, referer: https://URL/SOME/FOLDER/STRUCTURE/TEST/?cmd=logout
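
For what it's worth, since I'm not sure LocationMatch ever sees the query string, I have been considering matching the path with LocationMatch and testing the query string separately, along these lines (untested sketch; the path is the same placeholder as above, and the Require line is only an example of what would go inside the block):

# Untested sketch: assumes LocationMatch only matches the URL path, so the
# query string is checked separately with an <If> expression.
<LocationMatch "^/SOME/FOLDER/STRUCTURE/TEST/">
    <If "%{QUERY_STRING} =~ /(^|&)cmd=logout($|&)/">
        # Example directive only; replace with whatever the block should apply.
        Require all denied
    </If>
</LocationMatch>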

ip address – eCom “licensing issues” so removing content – good idea? Any prevention?

My client has a large eCom site, and due to licensing issues they can't sell to visitors in specific geographic IP ranges. They currently just remove the products, so the page is basically blank.

Can anyone think of a “best practice” workaround?

Rather than zero content, perhaps we should just publish a bunch of keyword content – right?

Does any workaround come to mind? Thanks for all comments…

networking – Debugging Linux/Java -> Redis performance issues

I have an application that currently uses an in-memory cache, and of course the performance is blazing fast. For reasons that are out of scope here, I want to start using Redis instead, but performance has dropped drastically.

I have a couple dozen instances, each running nginx and a Java app. /proc/sys/net/ipv4/ip_local_port_range has been bumped to 10240 65535 so that roughly 55,000 ports are available for nginx to talk to the app. With the in-memory cache, the environment can easily support some 10,000 RPS with the app using 45-50% of the CPU. With Redis, I'm only getting 2,500 RPS and CPU utilization doesn't even cross 10%. The Redis cluster is already configured to support tens of thousands of clients, but my app isn't actually making more than a couple hundred connections in total: approximately 8 connections per instance.

How do I go about debugging this? Is there an OS-level knob that I should twist for it to make more connections to Redis? System-level debugging isn’t my strong suit and I would appreciate any help. If it makes any difference, I am using the Jedis driver.

C++: Is a pointer to a vector going to cause memory issues?

I started writing a function that takes a pointer to a vector as a parameter so that it can modify that vector to output its results (the actual return value being an error code), and then I started to think about the memory behind that.

For example, if I have the following code:

#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> *vect = new std::vector<int>();

    for (uint32_t i = 0; i < 10; i++)
    {
        std::cout << "Ptr: " << vect << " Size " << vect->size() << " Max Size " << vect->capacity();
        vect->push_back(i);
        std::cout << " elements 0: " << (*vect)[0] << ", " << i << " :" << (*vect)[i] << std::endl;
    }
    return 0;
}

And when I run it, I get the following output:

Ptr: 0x557c393f9e70 Size 0 Max Size 0 elements 0: 0, 0 :0
Ptr: 0x557c393f9e70 Size 1 Max Size 1 elements 0: 0, 1 :1
Ptr: 0x557c393f9e70 Size 2 Max Size 2 elements 0: 0, 2 :2
Ptr: 0x557c393f9e70 Size 3 Max Size 4 elements 0: 0, 3 :3
Ptr: 0x557c393f9e70 Size 4 Max Size 4 elements 0: 0, 4 :4
Ptr: 0x557c393f9e70 Size 5 Max Size 8 elements 0: 0, 5 :5
Ptr: 0x557c393f9e70 Size 6 Max Size 8 elements 0: 0, 6 :6
Ptr: 0x557c393f9e70 Size 7 Max Size 8 elements 0: 0, 7 :7
Ptr: 0x557c393f9e70 Size 8 Max Size 8 elements 0: 0, 8 :8
Ptr: 0x557c393f9e70 Size 9 Max Size 16 elements 0: 0, 9 :9

It seems as though this could cause major memory issues, because if the vector needs to expand, it could be writing into space that is already being used, since that pointer does not appear to change. Even running this over a much larger loop, the pointer looks constant.
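
To see what actually moves, I suppose I could also print the address of the underlying buffer alongside the address of the vector object itself, something like this (an untested variation on the loop above):

#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> *vect = new std::vector<int>();

    for (uint32_t i = 0; i < 10; i++)
    {
        // The address of the vector object itself stays the same...
        std::cout << "Vector object: " << vect
                  // ...while the heap buffer it manages can move when capacity grows.
                  << " data(): " << static_cast<const void *>(vect->data())
                  << " capacity: " << vect->capacity() << std::endl;
        vect->push_back(static_cast<int>(i));
    }

    delete vect;
    return 0;
}

If data() changes while the vector's own address stays put, I assume that would be the reallocation happening under the hood rather than the vector writing past its buffer.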

I'm still a (relatively) new programmer, and I'm not sure I have the grasp of memory allocation that I would like. Is my understanding correct: will this cause buffer errors and overwrite adjacent memory? Or is there some protection in std::vector that I am not considering?

top level domains – Internal G Suite email issues using a new TLD

I'm having issues with our G Suite email service since we switched from .com to .fun. I found articles from Google saying that new TLDs are fine for search, but nothing about using such a TLD for an email address. The issues are quirky: when filling out forms online, the email address gets labelled invalid, etc. Is this a TLD issue, or something in how we set things up, i.e. our SPF and DKIM settings?