sigma – Can I add third party lens profiles to Nikon cameras or software?

If you’re willing to do an intermediate step using 16-bit TIFF files, the best way to get where you want is to:

  • Shoot .NEF, then use Nikon’s software to process your color, contrast, curves, HSL, etc. before outputting from the Nikon application to 16-bit TIFF. The color and other processing you do in the Nikon software will be “baked in” to the TIFF files. It’s just that with a 16-bit raster image you’ll have more latitude for additional adjustments in Photoshop or whatever other graphics processing software you choose to use than with an 8-bit JPEG.
  • Import your TIFFs into Photoshop (or Gimp, or Darktable, etc.) and apply correction for lens distortion, CA, etc., there.
  • If you have an image that you wish to process using multiple layers, you might need to export multiple TIFFs from Nikon’s Capture with the brightness, contrast, colors, etc. adjusted for what you desire in each layer. Then combine them in Photoshop or other graphics processing software, using each TIFF as its own layer.
  • Another approach to such high dynamic range scenes would be to adjust everything to your liking in Nikon’s Capture, then compress the contrast so that every shade in the raw file fits within the output range before exporting. Sure, it will look “flat”, but all of your color adjustments will still be in there and should look very similar once you raise the black point and lower the white point to around the same levels in Photoshop (see the sketch below).
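
To make that last step concrete, here is a minimal sketch of the black point / white point math in TypeScript. It assumes 16-bit channel values (0–65535); remapLevels() is a hypothetical helper for illustration, not part of any Nikon or Adobe tool:

    // Remap a "flat" 16-bit value so that [blackPoint, whitePoint]
    // stretches back across the full range, clipping anything outside it.
    // Hypothetical helper for illustration only.
    function remapLevels(value: number, blackPoint: number, whitePoint: number): number {
      const scaled = ((value - blackPoint) / (whitePoint - blackPoint)) * 65535;
      return Math.min(65535, Math.max(0, Math.round(scaled)));
    }

    // If a flat export occupies only, say, 4096–61439:
    remapLevels(4096, 4096, 61439);  // 0     – the new black point
    remapLevels(32768, 4096, 61439); // 32768 – midtones barely move
    remapLevels(61439, 4096, 61439); // 65535 – the new white point

Because the remap is monotonic, the relationships between your color adjustments survive; only the endpoints of the tonal range move.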

There’s a lot to keep in mind about raw vs. TIFF:

The 12, 14, or 16-bits used to record raw data and the 16-bits (per color channel) used to record a demosaiced and gamma corrected TIFF or PSD are not used to represent the same exact thing in the same exact way, even though raw files often use containers that conform to the larger TIFF standard.

Raw files have one single, monochromatic brightness value for each photosite (a/k/a sensel or pixel well). Each sensel has a color filter over it that is one of three colors often referred to as “red”, “green”, and “blue”. But the colors used in Bayer masks are NOT the same colors as the ‘Red’, ‘Green’, and ‘Blue’ emitted by our RGB display systems. To get any real color information out of a raw file, the monochrome luminance value measured by each sensel is compared to the monochrome luminance values recorded by surrounding sensels filtered with each of the colors used in the Bayer mask, and color information is interpolated from the results. This works as well as it does because the human vision system works in a remarkably similar manner. The wavelengths that the three types of cones in our eyes are most sensitive to are not ‘Red’, ‘Green’, and ‘Blue’ either!¹
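
To make the interpolation concrete, here is a toy bilinear demosaic of one interior pixel in TypeScript, assuming an RGGB Bayer pattern. Real raw converters use far more sophisticated algorithms; this only shows the neighbor-averaging idea:

    // Toy bilinear demosaic for one interior pixel of an RGGB mosaic.
    // Each mosaic entry is the single monochrome value one sensel recorded.
    type RGB = { r: number; g: number; b: number };

    function demosaicPixel(mosaic: Float32Array, width: number, x: number, y: number): RGB {
      const at = (px: number, py: number) => mosaic[py * width + px];
      const v = at(x, y);
      // Averages of the four edge-adjacent and four diagonal neighbors.
      const edges = (at(x - 1, y) + at(x + 1, y) + at(x, y - 1) + at(x, y + 1)) / 4;
      const diags = (at(x - 1, y - 1) + at(x + 1, y - 1) + at(x - 1, y + 1) + at(x + 1, y + 1)) / 4;
      const horiz = (at(x - 1, y) + at(x + 1, y)) / 2;
      const vert = (at(x, y - 1) + at(x, y + 1)) / 2;

      if (y % 2 === 0 && x % 2 === 0) return { r: v, g: edges, b: diags }; // red-filtered sensel
      if (y % 2 === 1 && x % 2 === 1) return { r: diags, g: edges, b: v }; // blue-filtered sensel
      if (y % 2 === 0) return { r: horiz, g: v, b: vert };                 // green on a red row
      return { r: vert, g: v, b: horiz };                                  // green on a blue row
    }

Every output pixel gets two of its three channel values purely from its neighbors – which is why demosaicing quality matters so much.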

What we usually mean when we say “TIFF files” are raster images that have a 16-bit number for each of the red, green, and blue channels for each pixel in the image. That’s why 16-bit TIFF files are so much larger than 14-bit raw files, even though they actually contain less total information.
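
A quick back-of-the-envelope comparison makes the size difference obvious. Assuming a hypothetical 24 MP sensor and ignoring headers, metadata, and compression:

    const pixels = 6000 * 4000;           // 24 MP
    const rawBytes = (pixels * 14) / 8;   // one 14-bit value per sensel
    const tiffBytes = pixels * 3 * 2;     // three 16-bit channels per pixel
    console.log(rawBytes / 2 ** 20);      // ≈ 40 MiB of raw sensor data
    console.log(tiffBytes / 2 ** 20);     // ≈ 137 MiB for the 16-bit TIFF

Roughly 3.4× the bytes, even though the TIFF’s extra channel values were all interpolated from the raw data.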

For more, please see:

  • Why can software correct white balance more accurately for RAW files than it can with JPEGs?
  • RAW to TIFF or PSD 16bit loses color depth
  • Why are Red, Green, and Blue the primary colors of light?

¹ The colors used in Bayer masks are not the same colors emitted by our RGB screens. “Red” is not even close: the “red” cones in our retinas are most sensitive to about 564nm (a lime-green color between green and yellow), the “red” filters in most Bayer masks are most transmissive at about 590-600nm (orange-yellow), while our RGB systems emit at 640-650nm for ‘Red’.

sigma – Can I add third party lens profiles to Nikon cameras?

I’m starting to shoot directly in JPG to save time, since I don’t need RAW processing most of the time.

However, my D7100 and my Z50 have no built-in lens correction profiles for my Sigma 18-50 and 50-150 lenses.

Is there a way to hack the firmware to add support for third party lenses in camera?

I could shoot RAW and batch export JPG using Lightroom, but their standard profiles are not equivalent to what Nikon offers and I like the Nikon colours better.

Do any Nikon DX or point-and-shoot cameras perform in-camera HDR?

Can any Nikon DX format or point and shoot cameras perform in-camera HDR without the need for post processing with software? I have a D5600, A900, and P9000. Also considering a used D7100. (Please do not limit the answer to these cameras).

equipment recommendation – Compact cameras with built-in HDR?

I’ve had a Canon ELPH310 HS compact camera for a number of years and it has fit my needs adequately. I’m noticing that my non-top-of-the-line phone now exceeds my camera in capabilities, so I’d like to upgrade my camera to one with built-in HDR.

I can’t seem to find a reasonably priced, current-model compact camera with HDR. There is a good post on this subject here, but it is 9 years old now. In this post, for example, the Panasonic Lumix ZS20 is mentioned as having HDR. When I look at a more current model, the Panasonic Lumix ZS60 (which is still not the most current model), it doesn’t seem to have HDR. I’m wondering why HDR seems to have been dropped from some compacts over the years.

I notice that the ZS60 does have an auto-bracketing mode, so HDR can be accomplished in post-processing, but built-in HDR would be so much handier for my purposes.

Why don’t many modern cameras have built-in flash?

Historically, most cameras did not have built-in flash. As cameras for casual use became more sophisticated in the ’80s and ’90s, built-in flash became more common on mass-market designs. The trend away from built-in flash is probably just a reversion to the norm.

Also, flash is useless for video, and video is an increasingly relevant use. Even for still photography, there’s nothing wrong with continuous light.

Finally, for manufacturers there are not a lot of positives in providing a flash, but plenty of negatives: shortened battery life between charges, an increased likelihood of mismatch between user expectations and reality, and all the issues that come from the additional complexity of design and manufacturing.

image manipulation – HDR smartphone cameras: is it done in hardware or software?

Many smartphone “HDR apps” don’t actually do any real HDR, so it’s useful to define what we are talking about.

HDR stands for “high dynamic range”. It’s designed for situations where the highlights are too bright and the dark areas are too dark for the camera to capture the entire range of brightness in a single shot.

HDR is done in two steps:

  1. In the first step the software creates an image that does contain the entire brightness range, usually by combining multiple shots taken at different exposure levels – the result of this step is usually an image that looks bland and boring if viewed directly.

  2. In the second step the software takes the result from the first step and processes it to enhance details, colors, and local contrast – this produces the “HDR look”: images with saturated colors and details everywhere.
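
For concreteness, here is a stripped-down TypeScript sketch of both steps, assuming linear grayscale frames normalized to [0, 1] with known exposure times. Real apps (and libraries such as OpenCV’s HDR module) are considerably more sophisticated, especially about frame alignment and local contrast:

    // Step 1: merge bracketed frames into one high-dynamic-range radiance map.
    function mergeExposures(frames: Float32Array[], exposureTimes: number[]): Float32Array {
      const hdr = new Float32Array(frames[0].length);
      const weightSum = new Float32Array(frames[0].length);
      frames.forEach((frame, i) => {
        for (let p = 0; p < frame.length; p++) {
          // Trust mid-tones most; near-black and near-white pixels least.
          const w = 1 - Math.abs(2 * frame[p] - 1);
          hdr[p] += w * (frame[p] / exposureTimes[i]); // estimate scene radiance
          weightSum[p] += w;
        }
      });
      for (let p = 0; p < hdr.length; p++) hdr[p] /= weightSum[p] || 1;
      return hdr; // looks "bland and boring" if displayed directly
    }

    // Step 2: tone-map the radiance back into displayable range.
    // (A global Reinhard-style curve; real apps add local contrast on top.)
    function toneMap(hdr: Float32Array): Float32Array {
      return hdr.map(x => x / (1 + x));
    }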

Many HDR apps only perform the second step: they take a single non-HDR image and process it to look HDR-ish. You can recognize them easily because they take just one image. For those apps you need a good camera that is capable of capturing the entire dynamic range of the scene on its own. (There are no truly “good” apps in this category, since this is all cheating, but you still want the better ones – the not-very-best tend to push the saturation and processing way too far, into “clown puke” territory and beyond.)

The real HDR apps take multiple shots – traditionally at least 3 – and combine them. If you have one of those, the app matters more than the camera, because the entire HDR technique was invented to overcome camera limitations – so it works pretty well even with limited cameras.

Obviously the camera still matters: you can get by with a bad camera and a good app, but you’ll get better results from a good camera with a good app.

How to determine which cameras are front and back facing when using HTML5 getUserMedia and enumerateDevices APIs?

When accessing the camera using HTML5 getUserMedia APIs, you can either:

  1. Request an unspecified “user” facing camera
  2. Request an unspecified “environment” facing camera (optionally left or right)
  3. Request a list of cameras available

Originally we used the facingMode constraint to choose the camera. If the camera faces the “user”, we show it mirrored, as is the convention.

We run into problems, however, when a user does not have exactly 1 user-facing and 1 environment-facing camera. They might be missing one of these, or have multiple. This can result in the wrong camera being used, or the camera not being mirrored appropriately.

So we are looking at enumerating the devices. However, I have not found a way to determine whether a video device is “user facing” and should be mirrored.

Is there any API available to determine whether a video input is “user” facing in these APIs?
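
(Not a definitive answer, but to illustrate the kind of probe being asked about: the sketch below combines track.getSettings().facingMode, getCapabilities() where implemented, and a label heuristic. Browser support varies, so treat it as best effort rather than a guaranteed API.)

    // Best-effort guess at whether a given video input faces the user.
    async function isUserFacing(deviceId: string): Promise<boolean | undefined> {
      const stream = await navigator.mediaDevices.getUserMedia({
        video: { deviceId: { exact: deviceId } },
      });
      const track = stream.getVideoTracks()[0];
      const facing =
        track.getSettings().facingMode ??          // widely supported
        track.getCapabilities?.().facingMode?.[0]; // not implemented everywhere
      track.stop();
      if (facing) return facing === 'user';
      // Fallback heuristic: many platforms put "front"/"back" in the label.
      // (Labels are only populated once a getUserMedia permission is granted.)
      const devices = await navigator.mediaDevices.enumerateDevices();
      const label = devices.find(d => d.deviceId === deviceId)?.label.toLowerCase() ?? '';
      if (/front|user|face/.test(label)) return true;
      if (/back|rear|environment/.test(label)) return false;
      return undefined; // genuinely unknown – fall back to asking the user
    }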

Is it possible to do infrared photography with digital cameras?

The Sigma DSLR cameras do straight IR work without long exposures or altering the sensor – all you have to do is remove the dust protector, which takes a moment, requires no tools, and is easy to put back.

At that point you can do what is called full-color IR, where you capture the other portions of the spectrum but extend into IR, or you can use an IR-pass filter such as the Wratten filters (87B being an example). I have a Cokin-P filter holder that I use with an IR-pass Wratten gel – that way I can frame and focus, then quickly drop the IR filter into place before shooting.

You can find a number of images taken with an older version of the Sigma DSLR here:

http://www.pbase.com/sigmadslr/all_infrared


Why don’t cameras shoot video past a specific fps? [closed]

I’d like to know on what basis a camera model gets its maximum frame rate setting for video.

Disc film cameras that use replaceable batteries?

NO batteries required.

Use, or modify, a 4×5 large format film holder so you can attach or tape the film, removed from its cartridge, in the middle of it. So that the disc does not interfere with the dark slide, you may have to remove the plastic center piece of the film disc.

Now use the 4×5 LF film holder in a pinhole camera that accepts 4×5 LF film holders. It will expose all of the negatives simultaneously. Make an enlarger negative carrier for the film and print it as one photo, or send it to a lab, have each negative printed, and then assemble them in the same orientation. Think outside the film cartridge.

Or variations on that theme. Experimentation encouraged.

NOTE: Of course, all work with the film MUST be done in a darkroom, i.e. a room with absolutely no light.

No affiliation with the Zero Image camera I linked to, other than that I own one and love it. (Another option is to make your own.)