character creation – Could someone with a cast-iron mask burned onto their face speak?

macbook pro – Can the MBP 16-inch (Late 2019) Retina screen be replaced with a non-Retina (non-Apple) screen? Is it possible, and what problems might I face if I do that?

shaders – Unity HDRP transparent material back-face rendering/front face culling

I'm trying to render an object with two different transparent materials. One should render only the back faces (higher opacity/more color) and one only the front faces (lower opacity/less color). The purpose is to show another object inside the mesh more clearly, while keeping the more saturated color on the back faces.

I tried to render only the back faces using a Shader Graph, but it seems to be impossible to render only the back faces with a transparent material. With opaque materials, isolating the back faces was not a problem.

Is there a way to render only the back faces of a transparent material and overlap them with another material that renders only the front faces?
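
For reference, the intended setup could be sketched roughly like this. This is a hypothetical helper script, not an HDRP-specific API: it only assigns the two materials and forces the back-face pass to draw before the front-face pass; the actual front/back culling would still have to come from the two transparent materials/Shader Graphs themselves.

using UnityEngine;

public class TwoPassTransparency : MonoBehaviour
{
    public Material backFaceMaterial;   // more saturated, should render back faces only
    public Material frontFaceMaterial;  // lower opacity, should render front faces only

    void Start()
    {
        // Draw the back-face material first, then the front-face material on top of it.
        backFaceMaterial.renderQueue = 3000;   // start of the transparent queue
        frontFaceMaterial.renderQueue = 3001;

        // Assigning more materials than submeshes makes Unity render the mesh once per extra material.
        GetComponent<MeshRenderer>().materials = new[] { backFaceMaterial, frontFaceMaterial };
    }
}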

(By the way, this is the model I need it for. The bones should pretty much preserve their details/color, while the jello is a rich green; screenshot from Blender.)

[screenshot: the problem child]

Any help is greatly appreciated!
Thank you!

dungeon world – Where is the ‘Show the true face of Death’ move detailed?

It is a custom move for that example monster.

You can read more about how to create custom moves in the Making Moves chapter, which makes specific mention of monster moves:

Moves made by monsters against the players aren’t player moves at all. They’re monster moves, simple statements of what the monster does. Trying to make every monster move into a player move will seriously hamper your creativity.

c# – Azure Face Identification example

I want to use the Azure Face API to identify persons in an image.

Last week I copied the example code found in the MS documentation.

But somehow I get wrong results.
The confidence always seems to be the same, but the name of the detected person is different nearly every time.

My Code:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

namespace FaceDetection
{
    class Program
    {
        static string personGroupId = Guid.NewGuid().ToString();
        /*
        *   AUTHENTICATE
        *   Uses subscription key and region to create a client.
        */
        public static IFaceClient Authenticate(string endpoint, string key)
        {
            return new FaceClient(new ApiKeyServiceClientCredentials(key)) { Endpoint = endpoint };
        }

        /* 
        * DETECT FACES
        * Detects features from faces and IDs them.
        */
        public static async Task DetectFaceExtract(IFaceClient client, string url, string recognitionModel)
        {
            Console.WriteLine("========DETECT FACES========");
            Console.WriteLine();

            // Create a list of images
            List<string> imageFileNames = new List<string>
                            {
                                @"C:VS Code rootC#MachineLearningImagesFacesdetection1.jpg",//jonas
                                @"C:VS Code rootC#MachineLearningImagesFacesdetection2.jpg",//jonas
                                @"C:VS Code rootC#MachineLearningImagesFacesdetection3.jpg",//sebe
                                @"C:VS Code rootC#MachineLearningImagesFacesdetection4.jpg",//sebe
                                @"C:VS Code rootC#MachineLearningImagesFacesdetection5.jpg",//marie
                                @"C:VS Code rootC#MachineLearningImagesFacesdetection6.jpg",//marie
                            };

            foreach (var imageFileName in imageFileNames)
            {
                IList<DetectedFace> detectedFaces;

                // Detect faces with all attributes from image url.
                detectedFaces = await client.Face.DetectWithUrlAsync($"{url}{imageFileName}",
                        returnFaceAttributes: new List<FaceAttributeType?> { FaceAttributeType.Accessories, FaceAttributeType.Age,
                        FaceAttributeType.Blur, FaceAttributeType.Emotion, FaceAttributeType.Exposure, FaceAttributeType.FacialHair,
                        FaceAttributeType.Gender, FaceAttributeType.Glasses, FaceAttributeType.Hair, FaceAttributeType.HeadPose,
                        FaceAttributeType.Makeup, FaceAttributeType.Noise, FaceAttributeType.Occlusion, FaceAttributeType.Smile },
                        // We specify detection model 1 because we are retrieving attributes.
                        detectionModel: DetectionModel.Detection01,
                        recognitionModel: recognitionModel);

                Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{imageFileName}`.");
            }
        }
        public static async Task IdentifyInPersonGroup(IFaceClient client, string url, string recognitionModel)
        {
            Console.WriteLine("========IDENTIFY FACES========");
            Console.WriteLine();

            // Create a dictionary for all your images, grouping similar ones under the same key.
            Dictionary<string, string[]> personDictionary =
                new Dictionary<string, string[]>
                    {
                        { "Jonas", new[] { @"C:\VS Code root\C#\MachineLearning\Images\Faces\detection1.jpg", @"C:\VS Code root\C#\MachineLearning\Images\Faces\detection2.jpg" } },
                        { "Sebastian", new[] { @"C:\VS Code root\C#\MachineLearning\Images\Faces\detection3.jpg", @"C:\VS Code root\C#\MachineLearning\Images\Faces\detection4.jpg" } },
                        { "Marie", new[] { @"C:\VS Code root\C#\MachineLearning\Images\Faces\detection5.jpg", @"C:\VS Code root\C#\MachineLearning\Images\Faces\detection6.JPG" } }
                    };
            // A group photo that includes some of the persons you seek to identify from your dictionary.
            //string sourceImageFileName = @"https://cloud.jonas-heinze.de/index.php/apps/files_sharing/publicpreview/qJJmQAdjD5SzbGM?x=1920&y=629&a=true&file=detectionTarget.jpg&scalingup=0";
            string sourceImageFileName = url;

            // Create a person group. 
            Console.WriteLine($"Create a person group ({personGroupId}).");
            await client.PersonGroup.CreateAsync(personGroupId, personGroupId, recognitionModel: recognitionModel);
            // The similar faces will be grouped into a single person group person.
            foreach (var groupedFace in personDictionary.Keys)
            {
                // Limit TPS
                await Task.Delay(250);
                Person person = await client.PersonGroupPerson.CreateAsync(personGroupId: personGroupId, name: groupedFace);
                Console.WriteLine($"Create a person group person '{groupedFace}'.");

                // Add face to the person group person.
                foreach (var similarImage in personDictionary[groupedFace])
                {
                    Console.WriteLine($"Add face to the person group person({groupedFace}) from image `{similarImage}`");
                    PersistedFace face = await client.PersonGroupPerson.AddFaceFromUrlAsync(personGroupId, person.PersonId,
                        $"{url}{similarImage}", similarImage);
                }
            }
            // Start to train the person group.
            Console.WriteLine();
            Console.WriteLine($"Train person group {personGroupId}.");
            await client.PersonGroup.TrainAsync(personGroupId);

            // Wait until the training is completed.
            while (true)
            {
                await Task.Delay(1000);
                var trainingStatus = await client.PersonGroup.GetTrainingStatusAsync(personGroupId);
                Console.WriteLine($"Training status: {trainingStatus.Status}.");
                if (trainingStatus.Status == TrainingStatusType.Succeeded) { break; }
            }
            Console.WriteLine();

            List<Guid?> sourceFaceIds = new List<Guid?>();
            // Detect faces from source image url.
            List<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognitionModel);
            //List<DetectedFace> detectedFaces = await DetectFaceRecognize(client, $"{sourceImageFileName}", recognitionModel);

            // Add detected faceId to sourceFaceIds.
            foreach (var detectedFace in detectedFaces) { sourceFaceIds.Add(detectedFace.FaceId.Value); }

            // Identify the faces in a person group. 
            var identifyResults = await client.Face.IdentifyAsync(sourceFaceIds, personGroupId);

            foreach (var identifyResult in identifyResults)
            {
                Person person = await client.PersonGroupPerson.GetAsync(personGroupId, identifyResult.Candidates[0].PersonId);
                Console.WriteLine($"Person '{person.Name}' is identified for face in: {sourceImageFileName} - {identifyResult.FaceId}," +
                    $" confidence: {identifyResult.Candidates[0].Confidence}.");
            }
            Console.WriteLine();
        }

        private static async Task<List<DetectedFace>> DetectFaceRecognize(IFaceClient faceClient, string url, string recognition_model)
        {
            // Detect faces from image URL. Since only recognizing, use the recognition model 1.
            // We use detection model 2 because we are not retrieving attributes.
            IList<DetectedFace> detectedFaces = await faceClient.Face.DetectWithUrlAsync(url, recognitionModel: recognition_model, detectionModel: DetectionModel.Detection02);
            Console.WriteLine($"{detectedFaces.Count} face(s) detected from image `{Path.GetFileName(url)}`");
            return detectedFaces.ToList();
        }

        /*
        * DELETE PERSON GROUP
        * After this entire example is executed, delete the person group in your Azure account,
        * otherwise you cannot recreate one with the same name (if running example repeatedly).
        */
        public static async Task DeletePersonGroup(IFaceClient client, String personGroupId)
        {
            await client.PersonGroup.DeleteAsync(personGroupId);
            Console.WriteLine($"Deleted the person group {personGroupId}.");
        }
        static void Main(string[] args)
        {
            // From your Face subscription in the Azure portal, get your subscription key and endpoint.
            const string SUBSCRIPTION_KEY = "**********";
            const string ENDPOINT = @"https://westeurope.api.cognitive.microsoft.com";

            const string RECOGNITION_MODEL3 = RecognitionModel.Recognition03;

            const string IMAGE_BASE_URL = "https://cloud.jonas-heinze.de/index.php/apps/files_sharing/publicpreview/qJJmQAdjD5SzbGM?x=1920&y=629&a=true&file=detectionTarget.jpg&scalingup=0";

            IFaceClient client = Authenticate(ENDPOINT, SUBSCRIPTION_KEY);

            /*while(true)
            {
                Console.Write("Enter an image url >");
                string IMAGE_BASE_URL = Console.ReadLine();

                if(IMAGE_BASE_URL == "quit")
                {
                    break;
                }

                Console.Clear();*/

                IdentifyInPersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL3).Wait();
            //}          

            DeletePersonGroup(client, personGroupId).Wait();
            Console.WriteLine("end");
        }
    }
}
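
One thing that might be worth checking (a hedged suggestion, not part of the original sample): the identification loop above always reads Candidates[0] without verifying that a candidate exists or that its confidence is meaningful. A minimal sketch of a more defensive loop, using the same SDK calls as the code above (the 0.7 threshold is an arbitrary example value), could slot into the same Program class:

private static async Task PrintBestCandidates(IFaceClient client, string personGroupId, IList<Guid?> sourceFaceIds)
{
    // Identify the detected faces against the trained person group.
    var identifyResults = await client.Face.IdentifyAsync(sourceFaceIds, personGroupId);

    foreach (var identifyResult in identifyResults)
    {
        // Skip faces the service could not match to anyone in the group.
        if (identifyResult.Candidates == null || identifyResult.Candidates.Count == 0)
        {
            Console.WriteLine($"No candidate found for face {identifyResult.FaceId}.");
            continue;
        }

        // Candidates are ordered by confidence; only accept a reasonably confident match.
        var best = identifyResult.Candidates[0];
        if (best.Confidence < 0.7)
        {
            Console.WriteLine($"Face {identifyResult.FaceId}: best candidate is below the threshold (confidence {best.Confidence}).");
            continue;
        }

        Person person = await client.PersonGroupPerson.GetAsync(personGroupId, best.PersonId);
        Console.WriteLine($"Face {identifyResult.FaceId} identified as '{person.Name}' (confidence {best.Confidence}).");
    }
}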

Thanks for your help.

Why face ID of iphone X and XS stop working?

I have seen plenty of used iPhone X and XS listings that say the phone works fine except for Face ID. This made me quite curious: why do so many iPhone X or XS units end up with a broken Face ID? Apparently it's a depth camera; are depth cameras more susceptible to faults?

iphone – Enforce authentication (passcode, Touch ID, Face ID) for viewing saved passwords on iOS

I was surprised to realize that I can view all my (Safari) saved passwords on my iPhone under iOS 14.4 without having to type my passcode (again). This is problematic: for a regular user, it would be easy for somebody to steal all their saved passwords by just borrowing their phone to “make a call”.

On macOS, if I want to view the saved passwords in Safari's preferences, I have to type my Mac password again. The same goes for Chrome.

Is there a way to enforce this on iOS?

Thank you in advance for your answers!

c# – Help with making the transform.up face toward velocity while preventing transform.forward from changing? (Unity)

I have been working on a personal project that is supposed to mimic Anthem (because why not). I am using a state machine for the character controller that I intend to integrate with Unity's animation state machine.

I am calculating a new velocity that makes the player follow the mouse when flying, and I am setting transform.up = Velocity so that the player appears to be going head-first toward its velocity. The issue is that when you start to rotate in a circular way on the X plane, the player starts to roll, and its transform.forward moves from facing the ground (which is what I want) to facing the sky.


public class s_MechFlyt : MechStates{

  float flySpeedPercentage;

  public s_MechFlyt(in MechController Master) : base(Master){
    flySpeedPercentage = 2.5f;
  }

  public override void Update(){
    if (_Master.SprintButtonHeld){

      Quaternion Rot = Quaternion.Euler(-_Master.DMouse.y , _Master.DMouse.x , 0);

      _Master.Velocity = Rot * Vector3.forward * _Master.Stats.Speed * flySpeedPercentage;
      Vector3 Rotation = _Master.Velocity;
      //Rotation.y = Mathf.Clamp(Rotation.z, 0, 50);
      _Master.transform.up = Vector3.Normalize(Rotation);
      //_Master.transform.rotation = Rot;

      //_Master.transform.rotation = Quaternion.RotateTowards(_Master.transform.up, _Master.Velocity, 5);

      _Master._Controller.Move(_Master.Velocity * Time.deltaTime);

      if (_Master.JumpButtonHeld){
        _Master.ChangeStates("hover");
        _Master.transform.up = Vector3.up;
      }

    }
    else{
      _Master.ChangeStates("fall");
      _Master.transform.up = Vector3.up;
    }
  }

  public override void EnterState(){
    _Master.CamController.ChangeStates("flying");
  }
}

Sorry if the code is a little cluttered; it is in the prototyping phase while I find a solution, and I will clean it up once I have it working.

The core code is in the public override Update().
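
For what it's worth, one common way to get "up follows velocity" without the roll drifting (a rough sketch only, reusing the _Master fields from the code above and not tested against this project) is to build the whole rotation explicitly instead of assigning transform.up:

// Sketch: keep transform.up aligned with the velocity while pinning roll,
// by constructing the rotation from an explicit forward/up pair.
Vector3 up = _Master.Velocity.normalized;

// Pick a stable reference for "forward" (world down, so the mech faces the ground)
// and project it onto the plane perpendicular to the velocity.
Vector3 forward = Vector3.ProjectOnPlane(Vector3.down, up);
if (forward.sqrMagnitude < 1e-6f)
    forward = Vector3.ProjectOnPlane(_Master.transform.forward, up); // fallback when flying straight down

// LookRotation sets forward exactly; since forward and up are perpendicular here, up is exact too.
_Master.transform.rotation = Quaternion.LookRotation(forward.normalized, up);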

I will remove spots on your face within 2 hr for $2

*****WELCOME TO MY SERVICE*****
Hi there
I’m a professional graphic designer with 3 years of experience and a Photoshop specialist. I can do any kind of background removal, face spot removal, photo retouching, background change, object removal, etc. If you need any service like these, you can contact me.

WHY SHOULD WE WORK TOGETHER?

  • I have 3 years of experience in this sector.
  • You will get professional service.
  • 100% satisfaction guarantee.

MY SERVICES:

  • Spot removal.
  • Photo retouching.
  • Skin smoothing.
  • Change background.
  • Object removal.
  • Makeup correction.

YOU WILL SEND ME:

  • High-resolution image
  • Clear image

[Note: Please ignore the nude images. I don’t work with nude images]

‘If you have any questions, please contact me before ordering.’

REGARDS
Abdullah khan


unity – Photon2 billboards take a long time to face the camera

I’m using PUN2 and trying to use name tag billboards for my players, but they do not face the camera correctly at first, and I have to wait more than 20 seconds before a player’s name tag returns to facing forward.

Here’s the code for my BillBoard script:

public class BillBoard : MonoBehaviour
{
    protected Transform ThisCameraPlayerBillBoard;

    private void Start()
    {
        ThisCameraPlayerBillBoard = GameSetup.GS.ThisCameraToBillBoard.GetComponent<Camera>().transform;
    }

    private void FixedUpdate()
    {
        transform.LookAt(transform.position + ThisCameraPlayerBillBoard.rotation * Vector3.forward,
            ThisCameraPlayerBillBoard.rotation * Vector3.up);
    }
}
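
If the delay comes from the camera reference being resolved before the local player's camera exists (which is common with network-instantiated players in PUN2), one possible variant is to resolve the camera lazily and rotate in LateUpdate, after the camera has moved for the frame. This is a sketch only; it assumes the local camera is tagged MainCamera rather than referenced through GameSetup.

using UnityEngine;

public class BillBoard : MonoBehaviour
{
    private Transform _cam;

    private void LateUpdate()
    {
        // Resolve the camera lazily, since it may not exist yet when this object spawns.
        if (_cam == null)
        {
            if (Camera.main == null) return;
            _cam = Camera.main.transform;
        }

        // Copy the camera's orientation so the name tag always faces it.
        transform.LookAt(transform.position + _cam.rotation * Vector3.forward,
                         _cam.rotation * Vector3.up);
    }
}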
