c# – Using a double buffer technique for concurrent reading and writing?

I have a relatively simple case where:

  1. My program will be receiving updates via WebSockets and will use them to update its local state. These updates are very small (usually 1–1000 bytes of JSON, so < 1 ms to deserialize) but very frequent (up to ~1000/s).
  2. At the same time, the program will be reading/evaluating this local state and outputting its results.
  3. Both of these tasks should run in parallel and will run for the duration of the program, i.e. never stop.
  4. Local state size is relatively small, so memory usage isn’t a big concern.

The tricky part is that updates need to happen “atomically”, so that the reader never sees a local state with, for example, only half of an update applied. The state is not constrained to primitives and could contain arbitrary classes as far as I can tell at the moment, so I cannot solve this with something simple like Interlocked atomic operations. I plan on running each task on its own thread, so a total of two threads in this case.

To achieve this goal I thought to use a double buffer technique, where:

  1. It keeps two copies of the state so one can be read from while the other is being written to.
  2. The threads communicate which copy they are using via a lock, i.e. the writer thread locks a copy while writing to it; the reader thread requests the lock once it’s done with its current copy; the writer sees that the reader has taken over that copy and switches to the other one.
  3. The writing thread keeps track of the state updates it has applied to the current copy, so that when it switches to the other copy it can “catch up”.

That’s the general gist of the idea, but the actual implementation will be a bit different of course.
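
To make the idea concrete, here is a minimal sketch of the scheme in C# (all names are illustrative, and it assumes exactly one writer thread and one reader thread): the writer applies each update to the back copy and queues it; when the reader asks for a fresh snapshot, the copies are swapped and the queued updates are replayed onto the copy the reader just released, so the two copies converge.

using System;
using System.Collections.Generic;

// Illustrative stand-in for the real local state (free to contain arbitrary objects).
class LocalState
{
    public Dictionary<string, object> Data { get; } = new Dictionary<string, object>();
}

// Double buffer for exactly one writer thread and one reader thread.
class DoubleBufferedState
{
    private readonly object _gate = new object();
    private LocalState _front = new LocalState(); // copy the reader is holding
    private LocalState _back = new LocalState();  // copy the writer updates
    private readonly Queue<Action<LocalState>> _pending = new Queue<Action<LocalState>>();

    // Writer thread: apply the update to the back copy and remember it,
    // so the other copy can "catch up" after the next swap.
    public void Write(Action<LocalState> update)
    {
        lock (_gate)
        {
            update(_back);
            _pending.Enqueue(update);
        }
    }

    // Reader thread: call this only when done with the previous snapshot.
    // Swaps the copies and replays the pending updates onto the copy the
    // reader just released, so both copies end up identical.
    public LocalState AcquireForRead()
    {
        lock (_gate)
        {
            (_front, _back) = (_back, _front);
            foreach (var update in _pending)
                update(_back);
            _pending.Clear();
            return _front;
        }
    }
}

The lock is held only for the duration of one small update or one swap, so neither thread blocks for long; note that this relies on updates being replayable (deterministic for a given state).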

I’ve tried to look up whether this is a common solution but couldn’t really find much info, so it’s got me wondering things like:

  1. Is it viable, or am I missing something?
  2. Is there a better approach?
  3. Is it a common solution? If so what’s it commonly referred to as?
  4. (bonus) Is there a good resource I could read up on for topics related to this?

I feel I’ve hit a dead end: I can’t find more resources and info to judge whether this approach is “good”, because I don’t know what to search for. I plan on writing this in .NET C#, but I assume the techniques and solutions translate to any language. All insights appreciated.

technique – Best approach for creating the average of multiple profile photos?

We’ve all seen profile photos that are a blend of hundreds or thousands of individual profile photos. Here are some examples:

https://duckduckgo.com/?q=average+look+of+people+from+every+country&t=osx&iax=images&ia=images

https://www.businessinsider.com/faces-of-tomorrow-2011-2

All examples that I’ve seen were made by an artist/photographer who took the individual photos themselves, allowing for optimal alignment between the individual photos.

With image recognition having taken huge leaps since this technique was first developed, is there software out there that can identify faces in individual photos and then use those detections to align and superimpose the photos onto each other?

object oriented – Choosing between a Template Wrapping and an Inheritance Technique

I want to introduce logging to potentially many classes in the future and am curious about the best techniques. Personally, I would go the AOP way, but in C++, given my compiler constraints, using an AOP compiler/library is not feasible.

At any rate, I was playing around with some potential approaches and currently have two: an inheritance implementation and a template-wrapping implementation. Below is a complete example.

#include <iostream>

/*********************************************
    Inheritance technique
*********************************************/
class Measurements
{
    public:
        void start() { std::cout << "start" << std::endl; }
        void end() { std::cout << "end" << std::endl; }
};

class InheritanceOp: public Measurements
{
    public:
        void myop() { start(); std::cout << "in myop" << std::endl; end(); }
};

/******************************************
    template wrapper technique
*******************************************/
namespace components{
    class Operation
    {
        public:
            void myop() { std::cout << "in myop" << std::endl; }
    };
}
namespace aspects{
    template <class OP>
    class Measurements_Aspect: public OP
    {
        public:
            void start() { std::cout << "start" << std::endl; }
            void end() { std::cout << "end" << std::endl; }

            void myop()
            {
                start();
                OP::myop();
                end();
            }
    };
}
namespace configuration{
    using Operation = aspects::Measurements_Aspect<components::Operation>;
}
using namespace configuration;

int main()
{
    // Template-wrapping version: the aspect is attached via the type alias.
    Operation op;
    op.myop();

    // Inheritance version: start()/end() are called explicitly inside myop().
    InheritanceOp iop;
    iop.myop();
}

I can see shortcomings in both, and I believe it’s largely a case-by-case decision, but I was hoping to get some opinions on the two techniques. Thanks!

alt process – Is taking photos with the back side of a film roll a real technique?

Yes, this technique is known as redscale. You can find a decent amount of information by googling that term, and you can see example photos on Flickr. As @MichaelC notes in the comments, on 35mm you’d have to figure out a way of extracting the film from the canister, inverting it, and putting it back.

You can even do this on B&W film, and there will be an effect, since the light will now be passing through the anti-halation layer first.

I don’t want to waste an entire film roll and realise it didn’t work.

It will work, in the sense that you will get some results out of it, but as with any alternative technique, you pretty much have to be prepared to waste rolls of film (more than one) as you experiment. 🙂 Plus, things can go wrong when unloading and reloading the film into the cartridge.

On the other hand, Lomography makes a redscale film that you load in the normal way – perhaps worth a try as a first approximation to see if you like the results.

Trying to find name of a problem solving technique

I heard the name of a problem-solving technique that sounds like “iso(s)”. I believe it focuses on dividing a technical problem into different levels.

technique – Is there a known practice of post-processing to make a finished photo while viewing a subject?

Photographers often aim to create a work that accurately depicts a subject, and/or is informed or inspired by the experience they had looking at it.

A significant part of the work of making a photo is often in post-processing on a computer, rather than in preparing and taking the photo with the camera.

So I wonder whether there’s any known practice of taking a computer to the subject (or vice versa) and creating the finished photograph while viewing both together. I’m sure that people have done this but my question is whether it’s a practice that has a name and perhaps prominent photographers have talked about doing, or prominent photography commentators have discussed.

Of course for small product photography this is likely to happen incidentally, and for street photography it’s usually impossible, so I’m thinking more about things like landscape, cityscape, and portraiture.

javascript – OOP refactoring technique

I’m teaching myself object-oriented programming in JavaScript, and as material I’m looking over this small p5.js sketch, Coding Train’s No. 78, which deals with particles flying around the canvas.

The full code is below:

// Daniel Shiffman
// http://codingtra.in

// Simple Particle System
// https://youtu.be/UcdigVaIYAk

const particles = [];

function setup() {
  createCanvas(600, 400);
}

function draw() {
  background(0);
  for (let i = 0; i < 5; i++) {
    let p = new Particle();
    particles.push(p);
  }
  for (let i = particles.length - 1; i >= 0; i--) {
    particles[i].update();
    particles[i].show();
    if (particles[i].finished()) {
      // remove this particle
      particles.splice(i, 1);
    }
  }
}

class Particle {

  constructor() {
    this.x = 300;
    this.y = 380;
    this.vx = random(-1, 1);
    this.vy = random(-5, -1);
    this.alpha = 255;
  }

  finished() {
    return this.alpha < 0;
  }

  update() {
    this.x += this.vx;
    this.y += this.vy;
    this.alpha -= 5;
  }

  show() {
    noStroke();
    //stroke(255);
    fill(255, this.alpha);
    ellipse(this.x, this.y, 16);
  }

}

Wanting to train my OOP skills, I’m trying to refactor this code into a more sophisticated one from an OOP point of view. So I refactored it by adding a Particles_Manipulation class and moving the logic of the draw function into that class as an action method.
The code is below:

// Daniel Shiffman
// http://codingtra.in

// Simple Particle System
// https://youtu.be/UcdigVaIYAk

class Particle {

  constructor() {
    this.x = 300;
    this.y = 380;
    this.vx = random(-1, 1);
    this.vy = random(-5, -1);
    this.alpha = 255;
  }

  finished() {
    return this.alpha < 0;
  }

  update() {
    this.x += this.vx;
    this.y += this.vy;
    this.alpha -= 5;
  }

  show() {
    noStroke();
    //stroke(255);
    fill(255, this.alpha);
    ellipse(this.x, this.y, 16);
  }

}

class Particles_Manipulation{
  constructor(){
    this.particles = [];
  }
  
  push_particles(_n){
    for (let i = 0; i < _n; i++) {
      let p = new Particle();
      this.particles.push(p);
    }
  }
  
  action(){
    for (let i = this.particles.length - 1; i >= 0; i--) {
      this.particles[i].update();
      this.particles[i].show();
      if (this.particles[i].finished()) {
        // remove this particle
        this.particles.splice(i, 1);
      }
    }
  }
}

const my_Particles_Manipulation = new Particles_Manipulation();

function setup() {
  createCanvas(600, 400);
}

function draw() {
  background(0);
  my_Particles_Manipulation.push_particles(5);
  my_Particles_Manipulation.action();
}

Could you evaluate whether my refactoring is good or not?

anima beyond fantasy – Ki Technique – Long Distance Attack + Projection

First off, I want to thank everyone for their answers so far – you’ve all been really helpful and fast, which is great for a GM new to this system 🙂

This question is about the Ki power Long Distance Attack. It states that you can add range to your attack, and then for base damage:


“To Determine a Technique’s long distance base damage, the player chooses either the damage produced by the hand held weapon, or a value equivalent to twice the user’s base presence, plus his power bonus (nevertheless the attack will not observe any of the special rules of the grasped weapon).”


This brings up several questions:

  1. If the user is making a presence attack, what damage type is used? Is it unarmed? Is it the type of the held weapon? Is it energy?
  2. If the user uses his weapon’s base damage, is strength included? (since power is included in the presence version).
  3. If the user uses his weapon’s base damage, are secondary effects applied, such as, say, electric damage if you are air attuned and take that power? It says that the special rules of the weapon aren’t applied, but I’m assuming that means things like tripping people – you just get a normal attack…
  4. If the user takes the projection power, and teleports behind the person to make the attack – is it STILL a projectile? If he uses presence, does he actually attack with the weapon? If so, does he use the weapon’s damage type with 2x his presence as base damage? This wording though O_o.

My initial read is this –

If you use a weapon, it’s the weapon’s BD + Str bonus, and you apply all the effects of the weapon as if it had hit the target in melee.

If you use presence, it’s 2× presence + Power bonus, and you treat it like unarmed or the Energy damage type (I can’t decide which).

If you use presence and TP…I’m lost.
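
For concreteness, my understanding of the presence math (numbers invented for illustration): a character with base Presence 40 and a Power bonus of +10 would get 2 × 40 + 10 = 90 base damage, regardless of the weapon held.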

Thanks in advance!

autofocus – confusion about the principle of on-sensor PDAF technique

There are a lot of pictures on the internet illustrating the principle of phase-detection autofocus, such as this one:

What is PDAF and how does it work? Phase Detection Autofocus explained


(first figure: ray diagram of phase-detection autofocus, from the linked article)

The simplest way to understand how PDAF works is to start by thinking about light passing the camera lens at the very extreme edges. When in perfect focus, light from even these extremes of the lens will refract back to meet at an exact point on the camera sensor.

(second figure: light from the two edges of the lens falling on a pair of PDAF sensors)

How Phase Detection Autofocus Works

When the light reaches these two sensors, if an object is in focus, light rays from the extreme sides of the lens converge right in the center of each sensor (like they would on an image sensor). Both sensors would have identical images on them, indicating that the object is indeed in perfect focus.

For the on-sensor PDAF technique, there are many special pixels with an opaque mask over one half.
It may look like this (image at the link below):

https://www.imaging-resource.com/PRODS/olympus-e-m1/ZTECH_PDAF_PIXELS.gif

The right-mask pixels and left-mask pixels are not adjacent.
How can the left image formed by the left-masked pixels and the right image formed by the right-masked pixels be identical when the object is in focus? From the first figure, an object point should be imaged onto a single pixel location when the object is in focus.

Technique for extreme depth of field in macro photography

Over thirty years ago I remember reading an article in a photography magazine that demonstrated a technique for getting extreme depth of field in a macro shot. The results, which made such an impression on my teenage self that I’m recalling them now, were blades of grass in the foreground with an entire backyard in focus – maybe 50 meters of depth of field. The setup was extremely homebrew: from what I remember it combined large lenses threaded back to front, and the exposure was made over as many hours of daylight as were available.

The results looked like something from Honey, I Shrunk the Kids, which was a popular movie around that time. It didn’t look like traditional macro photography; it made small things look giant, in the same way tilt-shifting makes giant things look small. To this day I’ve never seen photos like these, and I’m wondering if anyone knows of a technique to create similar photos. Looking back, it seems like black magic, given the lengths one must go to with focus stacking to achieve even a few millimeters of depth of field in extreme macro photography.