What are the best Epson Scan settings for scanning black and white negatives?

I use an Epson V600 scanner with the default Epson Scan application.
What are the best settings to use in the application?


For example, what are the right image type and resolution?
Similarly, should I enable Unsharp Mask, Grain Reduction, etc.?

post-processing – Which color filter should I use to compensate for a strong magenta cast on my color slides after scanning?

If your images simply had a magenta cast, adding green would balance the result and you could be on your way. Unfortunately, you are not so lucky.

Your pictures are magenta because the other film dye layers have faded. You still have the magenta information, but you have lost the cyan and yellow. Simply adding green will not suffice.

In a nutshell, you'll need to adjust the RGB channels separately, dramatically increasing blue and green, and possibly reducing red. The process is detailed here: https://www.scantips.com/color.html
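For illustration, here is a minimal sketch of what adjusting the channels separately might look like (the gain values here are invented for the example; real corrections have to come from inspecting your own scans):

```python
import numpy as np

def adjust_channels(img, r_gain, g_gain, b_gain):
    """Scale each RGB channel independently; img is a float array in [0, 1]."""
    gains = np.array([r_gain, g_gain, b_gain])
    return np.clip(img * gains, 0.0, 1.0)

# A faded-slide pixel with a magenta cast (strong red and blue, weak green).
faded = np.array([[[0.8, 0.3, 0.6]]])

# Hypothetical gains: boost green and blue, pull red back slightly.
corrected = adjust_channels(faded, r_gain=0.9, g_gain=1.8, b_gain=1.2)
```

Real restoration tools apply curves rather than flat gains, but the principle of per-channel correction is the same.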

Once you've done one, you can apply the same adjustments to the rest as a batch. This will probably only give you a starting point, though, since there is no guarantee that every slide has aged the same way.

Best image format from the Epson scanning software to use with Lightroom

The Epson scanner application (for the V600) lets you scan the image to six
output types. See the table below:

[screenshot of the six output format options]

What is the best format to use to retain maximum editing latitude in Lightroom?

mojave – Three-finger swipe between apps lags, but swiping up for Mission Control and then swiping sideways produces no lag

I experience a lag of about 1 to 3 seconds, but only when I swipe sideways with three fingers between applications/desktops.

However, if I first swipe up with three fingers to activate Mission Control, everything is fine and I can swipe seamlessly between desktops.

I also noticed that if I keep three fingers swiping on the trackpad, I am eventually able to change desktops; from that point, if I keep the gesture going, I can switch between the different desktops without lag.

All suggestions are welcome – I have already changed my color profile and reset my NVRAM, as suggested in other Mojave threads.

Artifacts – What can cause a detailed ghost image, flipped horizontally, in an image taken with a monochrome line-scan camera?

Assuming standard optics (spherical or near-spherical lenses, rotationally symmetric about the optical axis) and a "standard" camera (no beam splitters or mirrors in the optical path), there is nothing optical that will cause single-axis mirrored ghost images, whether left-right, top-bottom, or even diagonally oriented. This is because lenses perform the same transformation on any set of orthogonal input axes (i.e., the X and Y axes). Both dimensions are flipped: left is exchanged for right and top for bottom (in addition to scaling and probably some degree of distortion). In linear algebra, swapping X for -X and Y for -Y is mathematically equivalent to a 180° rotation about the Z axis (i.e., about the optical axis of the lens). Thus, in optically generated ghost images (again, with "standard" optics), the ghost elements are all mirrored through the center of the image, not across a vertical or horizontal "fold line".
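The linear-algebra claim is easy to check numerically; this small sketch (not tied to any particular camera) shows that a 180° rotation about the Z axis negates both coordinates:

```python
import numpy as np

theta = np.pi  # 180 degrees
rotation = np.array([
    [np.cos(theta), -np.sin(theta)],
    [np.sin(theta),  np.cos(theta)],
])

point = np.array([3.0, 2.0])
rotated = rotation @ point  # both coordinates negated: approximately (-3, -2)
```

A single-axis flip, by contrast, would be diag(-1, 1) or diag(1, -1), which no rotationally symmetric lens can produce.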

Moving away from standard optics, cylindrical-section lenses, which curve in one dimension (usually the lateral one) but not in the orthogonal (vertical) dimension, could create left-right mirrored ghost patterns. Anamorphic lenses, or at least anamorphic filters and adapters, come to mind. They compress the lateral field of view of the taking lens so that, when the image is printed or processed, it covers a much wider lateral field of view than the camera format normally allows. This is often used for widescreen cinema.

Beyond optics, I suppose it is possible that some sensor technologies could generate mirrored "ghost" images, perhaps because of the way the sensor data is read out or digitized. But that would be pure speculation on my part.

The last thing I can think of, at least within the optics or the camera itself, is some kind of reflection of the image off the sensor onto a flat surface in the optical path (a filter plate or something similar behind the lens, close to the film/sensor plane), and then back onto the sensor. But for the reflected image to appear even slightly in focus, the added reflection path would have to be quite short compared to the lens's back-focus distance. That implies the lens would have to be focused extremely close, and/or there would have to be very little distance between the lens's exit pupil and the sensor. In addition, this reflecting surface would have to be concave (as seen from the lens) in the lateral dimension only. Frankly, this last possibility is even more speculative and improbable than the previous paragraph.

Outside the camera, the most obvious explanation is a reflection in a window, car glass, or some other largely transparent but partially reflective surface. That would also explain why the reflected object shows the same magnification and focus as the real object in the picture.

hd wallet – Bitcoin address generation and UTXO scanning

I am developing a Bitcoin payment processing application.

I have two questions to ask:

  1. For each BIP32 derivation path, how many times should an address be used before I generate a new one, in order to avoid address reuse? For example, if address A has been used twice, should I generate a new address, or is there a specific number of uses allowed before generating a new one?
  2. When I generate a new address, I believe I have to scan the addresses along each derivation path and check, for each index, whether it holds funds:
    • Legacy
    • SegWit
    • Change addresses for Legacy
    • Change addresses for SegWit

What is the best way to scan these addresses without sending so many requests (one per index) to the block explorer to check for funds?

Thank you in advance.
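For what it's worth, the usual answer to the scanning question is the BIP44 "gap limit" convention: scan each derivation path until you hit N consecutive unused addresses, and batch the balance lookups instead of issuing one request per index. A rough sketch, where `derive_address` and `fetch_balances` are hypothetical stand-ins for your wallet library and block-explorer API:

```python
GAP_LIMIT = 20  # BIP44 convention: stop after 20 consecutive unused addresses

def scan_path(derive_address, fetch_balances, batch_size=20):
    """Scan one derivation path, batching balance lookups to cut request count."""
    used = {}
    index = 0
    gap = 0
    while gap < GAP_LIMIT:
        batch = [derive_address(i) for i in range(index, index + batch_size)]
        balances = fetch_balances(batch)  # one request covers the whole batch
        for addr in batch:
            if balances.get(addr, 0) > 0:
                used[addr] = balances[addr]
                gap = 0
            else:
                gap += 1
            if gap >= GAP_LIMIT:
                break
        index += batch_size
    return used

# Toy stand-ins: addresses are just "addr<i>"; only indices 0 and 3 hold funds.
funded = {"addr0": 5000, "addr3": 1200}
result = scan_path(lambda i: f"addr{i}",
                   lambda addrs: {a: funded.get(a, 0) for a in addrs})
```

You would run this once per derivation path (legacy/SegWit, receive/change). Many explorer APIs also accept multiple addresses per query, which is what the batching assumes.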

Scanning – How to include a thin emulsion-style border when scanning film?

What I do is let the scanner (or its software: I use VueScan) find the frame in the negative, and then slightly enlarge the frame so it includes some of the rebate (the unexposed edge of the negative). This is not difficult to do.

(When printing negatives, I gave up on the negative-carrier approach and now use glass, which makes it fiddly to get the film well positioned, but is otherwise much better.)

Are linear bounded automata Turing complete?

A linear bounded automaton is just a Turing machine with a finite tape instead of an infinite one.

But why does that prevent it from being Turing complete?

scanning – How to fix the error "The remote procedure call failed" in vb.net

This is my first question!
I'm trying to scan a page with a scanner and save it. I'm using VB.NET, WIA, and a Kodak i1310 scanner.
I get the error "The remote procedure call failed (0x800706be)" when this line runs:
var imageFile = (ImageFile)scannerItem.Transfer(FormatID.wiaFormatJPEG);

and here is my code:

private void btnScan1_Click(object sender, EventArgs e)
{
    var deviceManager = new DeviceManager();
    DeviceInfo firstScannerAvailable = null;

    // Find the first available scanner (WIA device indices start at 1).
    for (int i = 1; i <= deviceManager.DeviceInfos.Count; i++)
    {
        if (deviceManager.DeviceInfos[i].Type != WiaDeviceType.ScannerDeviceType)
            continue;

        firstScannerAvailable = deviceManager.DeviceInfos[i];
        break;
    }

    var device = firstScannerAvailable.Connect();
    var scannerItem = device.Items[1];

    var imageFile = (ImageFile)scannerItem.Transfer(FormatID.wiaFormatJPEG);

    var path = @"C:\Documents\scan.png";

    if (File.Exists(path))
        File.Delete(path);

    imageFile.SaveFile(path);
}

Can anybody help me?

scanning – What rendering intent to use when saving CIE-Lab values in SilverFast

I do not think this has anything to do with using the CIE-Lab color space, and you should only use the absolute colorimetric rendering intent if you are making a color proof. The relative (or perceptual) intent is almost always the better choice, and I wonder why you want to use absolute colorimetric, or the CIE-Lab color space, in the first place.

Fraser, Murphy and Bunting have this to say about the absolute colorimetric rendering intent in Real World Color Management, 2nd Edition:

Absolute colorimetric differs from relative colorimetric in that it does not map source white to destination white. Absolute colorimetric rendering from a bluish-white source to a destination with yellowish-white paper places cyan ink in the white areas, to simulate the output of one printer (including its white point) on a second device.

I do not know why you experienced such a marked color change, but most white printer papers contain optical brighteners that can confuse colorimeters. White papers without optical brighteners generally have a yellowish hue, which a good colorimeter will detect and which the absolute colorimetric rendering intent will reproduce. If you are scanning an art book, the paper probably does not contain optical brighteners.

Note that conversion algorithms between CIE-Lab and other color spaces may introduce inaccuracies, especially if you use 8-bit channels. Always use 16-bit channels with CIE-Lab.
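As a toy illustration of why bit depth matters here (this is plain quantization arithmetic, not anything SilverFast-specific):

```python
def quantize(x, bits):
    """Round a value in [0, 1] to the nearest representable level at this bit depth."""
    levels = (1 << bits) - 1
    return round(x * levels) / levels

x = 0.123456  # an arbitrary channel value
err_8bit = abs(quantize(x, 8) - x)
err_16bit = abs(quantize(x, 16) - x)
# The 16-bit rounding error is orders of magnitude smaller than the 8-bit one,
# so round-trip conversions accumulate far less damage.
```

Color-space conversions involve several such rounding steps in a row, which is why the errors become visible in 8-bit workflows.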

If you value the hair on your head, stick to the relative or perceptual rendering intents, and use CIE-Lab only if you have a real reason to do so. There is nothing wrong with Adobe RGB. If you are scanning for the web, keep it simple and use Perceptual and Adobe RGB.

Edit: Thinking back to my color-management days, I got dreadful color shifts whenever I inadvertently triggered a double conversion, on input or output. However, that was with Photoshop CS2 on Windows 2000, where neither the application nor the operating system knew what the other was doing. Maybe things have improved since then.