firewall – Is it a bad idea to open all IPv6 ports for devices on an isolated guest network?

At home I have a dual-stack IPv4/IPv6 broadband connection, and I also have a wireless access point. The access point currently bridges all traffic onto my local network, which is by no means segmented, so all visitors who use my wireless network have access to my entire local network.

While I certainly don't doubt my friends' good intentions, I can see the possibility of their smartphones being compromised, and I would prefer not to have compromised devices in my private LAN if I can help it. This, and the fact that being in my private LAN provides no benefit to my friends, makes me want to create a separate wireless guest network, which I would then also use for my own smartphone.

I am currently considering opening all ports for incoming IPv6 TCP and UDP traffic for devices on this separate guest network.

My reason for this is significantly improved service reliability. As a practical example, I use the XMPP app Conversations, which supports sharing files such as images, but this doesn't work very well as long as both I and the other party are in our respective local networks, probably because none of us has open (IPv6) or forwarded (IPv4) ports for our smartphones.

Just to verify this assumption, I opened all the IPv6 ports for my smartphone only. And voila, image sharing has worked perfectly since.
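For reference, on a Linux-based router such a rule set might look roughly like the following (a minimal sketch, assuming ip6tables and hypothetical interface names eth0 for the WAN, br-guest for the guest network, and br-lan for the private LAN; your router's firewall syntax may differ):

# allow all inbound IPv6 TCP/UDP, but only towards the guest network
ip6tables -A FORWARD -i eth0 -o br-guest -p tcp -j ACCEPT
ip6tables -A FORWARD -i eth0 -o br-guest -p udp -j ACCEPT
# the private LAN stays protected: only return traffic is let in
ip6tables -A FORWARD -i eth0 -o br-lan -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT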

The general implications of opening a router's IPv6 firewall have been widely discussed here, but I think my situation, a guest network for smartphones and other mobile devices, is not entirely comparable, because

  • Smartphones are designed to be directly connected to the Internet anyway, so they shouldn't have any problems with open IPv6 ports.
  • This would only affect the completely separate guest network; from the point of view of a device in my local network, any device on it would be just like any other device on the public Internet.

Is this sound reasoning, or is there something important that I'm not seeing?

macos – Add bad blocks to bad block file in HFS+

I have a drive that seems to have bad blocks: sometimes reading or writing will cause Finder, Terminal, or whatever application is using the drive to hang. I read that I can use badblocks to scan the drive and produce a list of bad blocks, and I also read that HFS+ has a "bad block file" containing a list of bad blocks, whose blocks are also marked as used in the allocation file. Is there a way / command to manually add bad blocks to this list?
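For context, the scan that produces the list might look like this (a sketch, assuming badblocks from e2fsprogs, e.g. installed via Homebrew, and a hypothetical device node /dev/rdisk2; unmount the volume first):

# read-only scan; writes one bad block number per line to bad-blocks.txt
sudo badblocks -b 4096 -s -v -o bad-blocks.txt /dev/rdisk2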

I posted an earlier question that gives more details about the drive, but it has no working answer: Force eject from a device

architecture – Is it good or bad design to do CRUD from an SPA like React?

I am creating a web application using ReactJS as the front end and MongoDB as the back end. I have the two options below and would like to know which one is preferred for performing CRUD operations.

Approach 1: Coming from a traditional JVM-based MVC stack (JSP, Spring Boot, Spring Data), the traditional approach to performing user registration as a CRUD operation is as follows:

[diagram: React front end → Node REST endpoint → MongoDB]
The Node instance effectively acts as a REST endpoint that performs authentication checks, receives HTTP requests, transforms them into the corresponding MongoDB documents, and saves them. I can see that if I follow this approach, I will have to create and deploy a separate NodeJS process as well as a separate ReactJS instance.

Approach 2:
With an SPA like React, we can also perform registration-specific validation and data access on the client side itself, and instead of making REST calls to Node we can save the Mongo document directly, so that the design becomes:
[diagram: React front end → MongoDB directly]

If I follow this approach, I don't have to create and deploy a separate Node instance; I build and deploy only the React application via webpack.

The only benefit I see with Approach 1 is the flexibility to switch to a different thick client like Angular, or to a native client for iOS or Android. But switching to a native client can also be done with React Native, and all the CRUD-related modules could be reused by React Native applications.

I would prefer Approach 1 because it offers flexibility, but it looks like a YAGNI approach. So I would like to know which of these patterns is used by the developer community. Is Approach 2 an anti-pattern?

battery – Is it a bad idea to let your phone charge all the time?

For the first part of your question, see Ideal charge/discharge percentage for maximum battery life?. The answers there explain it in detail, so yes, that's true.

For the second part, how long it takes depends on how frequently you charge it the wrong way, on your usage (how many charge cycles), etc., and you can't put a number on it. Keep in mind that batteries lose significant capacity just from aging, even when not in use. See this answer for more details: The depletion of battery capacity and its relationship to charging practices

[ Other – Society & Culture ] Open question: is Coronavirus really as bad as people claim it is?

I don't really see why it is worse than swine flu or Ebola, when we never shut down businesses for either of those two.

javascript – Node: Is it bad practice to create a new child_process.fork from the child process's close event?

I have a long-running Node process that fully occupies its single thread, so I migrated some setInterval tasks to a Node child process.

My child process is also long-running and handles things like database updates and data validation.

I am still in the process of implementing proper error handling in the child process, so for now I am closing the process on network or database errors and logging the errors as they occur.

Is it bad practice to recursively recreate the child process from the close event handler?

const cp = require('child_process');

const setupErrorProcess = () => {
    // fork a long-running worker that handles DB updates and validation
    let errorCheck = cp.fork('./childProcess/errorCheck.js', { stdio: 'pipe' });

    // 'close' is emitted once per child; respawning from a timer callback
    // does not grow the call stack, so this is not true recursion
    errorCheck.on('close', (code) => {
        setTimeout(() => { setupErrorProcess(); }, 1000);
    });

    errorCheck.on('message', (msg) => {
        console.log('Message from errorCheck', msg);
        // handle message here
    });
};

setupErrorProcess();

If the Internet connection is lost, the child process will end and a close event will be emitted.


resolution – Why are 48MP images taken with a 4K camcorder very bad?

Video cameras can get away with much more noise because of video's lower resolution and because the noise is spread over time. This camera has a sensor with a crop factor of 7.9, which means it has 1/64 the area of a full-frame sensor. It would actually be surprising if it even had the advertised 4K on its longest side, but you have an 8K × 6K photo, so even assuming a real 4K sensor (which would be surprising at this sensor size), only 1 in 4 pixels has a chance of not being the result of interpolation. But the staircasing of oblique lines does not look like interpolation by a factor of 2 but by at least 4, so that more like 1 pixel in 16 (or even 25) is based on real image data rather than interpolation.
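To make the arithmetic explicit (a rough calculation from the figures above):

\[ \text{relative sensor area} = \frac{1}{7.9^2} \approx \frac{1}{62} \approx \frac{1}{64} \]
\[ \frac{8000 \times 6000}{4000 \times 3000} = 4 \;\Rightarrow\; \text{at best 1 in 4 output pixels comes from a real sensor pixel} \]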

Which means this camera is trash, not a 4K camera. It may output 4000 pixels per line in video mode, but the actual resolution the camera works with is much smaller.

The image sensor is probably similar to what was used 10 years ago in cellphones. EXIF data does not even contain a manufacturer or brand name.

Just write it off. This thing is a scam.

digital – I took a picture with a 4K camera, and the picture is very bad

I bought a 4K Ultra HD digital camcorder and took a picture with it. I don't know much about cameras or photography, but I expected the image to be sharp, like high-resolution images are supposed to be.

But when I zoom in to 100%, it is pixelated. This was very disappointing, so I quickly checked the image's properties; see below.

[screenshot: image properties showing a resolution of 8000 × 6000]

I thought a 4K image at 100% zoom would look much better than a standard cell phone image. If I zoom in on this photo, it looks like it was taken with a 10-year-old cell phone.

What is wrong with my expectations, or with what I am doing? Are there any settings for taking a high-resolution image?

I am viewing the photo at a resolution of 8000 × 6000 on a 17-inch laptop, if that matters.

[Photo taken with the phone (Samsung A20s), with lighting]

[Photo taken with the camcorder, with lighting]

video – mp4 to m3u8: bad duration

My problem is that the m3u8 file has the wrong duration and the video stream reports no fps.

final.mp4 and test.m3u8 have a different duration.

How can I fix it?
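To compare the two durations directly, ffprobe can be used (assuming it is installed alongside ffmpeg):

ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1 final.mp4
ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1 hls/test.m3u8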

Source file:

Input #0, matroska,webm, from 'panasonic.mkv':
  Metadata:
    ENCODER         : Lavf57.66.105
  Duration: 00:00:46.12, start: 0.000000, bitrate: 4548 kb/s
    Stream #0:0: Video: h264 (High), yuv420p(top coded first (swapped)), 1920x1080 (SAR 1:1 DAR 16:9), 25 fps, 25 tbr, 1k tbn, 50 tbc (default)
    Metadata:
      ENCODER         : Lavc57.83.101 libx264
      DURATION        : 00:00:46.083000000
    Stream #0:1: Audio: vorbis, 48000 Hz, stereo, fltp (default)
    Metadata:
      ENCODER         : Lavc57.83.101 libvorbis
      DURATION        : 00:00:46.115000000

1. Split the source file into chunks for remote encoding

ffmpeg -i panasonic.mkv -map 0:a -map 0:v -codec copy -f segment -segment_time $LEN -segment_format matroska tempfolder/chunk-%03d.orig

2. Encode all orig files

# ffmpeg's %03d pattern only matches image sequences, so loop over the chunks;
# -f matroska is needed because .enc is not a known output extension
for f in chunk-*.orig; do
    ffmpeg -i "$f" -threads 0 -vcodec libx264 -preset slow -crf 22 -vf scale="trunc(oh*a/2)*2:480" -acodec aac -ab 128k -f matroska "${f%.orig}.enc"
done

3. Combine all enc files

echo "ffconcat version 1.0" > concat.txt
for f in chunk-*.enc; do  # glob expansion is already sorted
    echo "file $f" >> concat.txt
done

ffmpeg -y -f concat -i concat.txt -map 0 -f matroska -c copy out.mp4  # note: with -f matroska, out.mp4 is really a Matroska file despite its name

4. Repair file

ffmpeg -i out.mp4 -c copy final.mp4

5. MP4Box for the web

MP4Box -inter 500 -quiet -noprog final.mp4

6. Mp4 to m3u8

ffmpeg -i final.mp4 -c copy -start_number 1 -hls_list_size 0 hls/test.m3u8
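By default the HLS muxer cuts segments of roughly 2 seconds at the nearest keyframe; with -c copy the cut points depend entirely on where the keyframes are. The target length can be made explicit (hypothetical value of 4 seconds):

ffmpeg -i final.mp4 -c copy -start_number 1 -hls_time 4 -hls_list_size 0 hls/test.m3u8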

Result

final.mp4 output

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'final.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.20.100
  Duration: 00:00:46.30, start: 0.000000, bitrate: 743 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 852x480 (SAR 640:639 DAR 16:9), 610 kb/s, SAR 1:1 DAR 71:40, 24.91 fps, 50 tbr, 16k tbn, 50 tbc (default)
    Metadata:
      handler_name    : VideoHandler
    Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 127 kb/s (default)
    Metadata:
      handler_name    : SoundHandler

test.m3u8 output

Input #0, hls,applehttp, from 'test.m3u8':
  Duration: 00:00:23.04, start: 1.480000, bitrate: 0 kb/s
  Program 0
    Metadata:
      variant_bitrate : 0
    Stream #0:0: Video: h264 (High) ((27)(0)(0)(0) / 0x001B), yuv420p, 852x480 (SAR 640:639 DAR 16:9), 50 tbr, 90k tbn, 50 tbc
    Metadata:
      variant_bitrate : 0
    Stream #0:1: Audio: aac (LC) ((15)(0)(0)(0) / 0x000F), 48000 Hz, stereo, fltp
    Metadata:
      variant_bitrate : 0
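For reference, the per-segment durations recorded in the playlist can be summed to see how much of the stream it actually covers (a sketch, assuming a POSIX shell with bc available):

# each #EXTINF line carries one segment's duration in seconds
grep '#EXTINF' hls/test.m3u8 | cut -d: -f2 | cut -d, -f1 | paste -sd+ - | bc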