What is this green thing in the sea?

Wondering if someone knows what this green thing in the sea is?

[Two photos of the green object in the sea]

sharepoint online – What is this green box with a white arrow next to the word doc icon?

I was moving folders to a different location in SharePoint, and all the files within the folder moved except for this one.

What is distinct about this file from the others is the green box with a white arrow next to the Word doc icon. Refer to the picture.

[Image: the file's icon]

This file also seems to be corrupt, as I get a Microsoft error: “Word ran into a problem opening this document in a browser. To view this document, please open it in the desktop version.”
When I do, it is a blank document.

Does the arrow next to the icon indicate that the file is corrupt, or does it mean something else?

Any feedback is appreciated. Thanks

dnd 5e – Does Green Flame Blade and Sweeping Attack combo cause the bonus effect of GFB to hit again?

The bouncing effect of green-flame blade will only trigger once

The green-flame blade spell states (emphasis mine):

As part of the action used to cast this spell, you must make a melee attack with a weapon against one creature within the spell’s range, otherwise the spell fails. On a hit, the target suffers the attack’s normal effects, and green fire leaps from the target to a different creature of your choice that you can see within 5 feet of it. The second creature takes fire damage equal to your spellcasting ability modifier. (…)

The spell description constantly refers to a single creature that is the target of your melee attack and some different creature that is within 5 feet of them; thus, the spell has only one initial target. The Sweeping Attack Maneuver does not state anywhere that it increases the number of targets of a spell, only the attack itself. Green-flame blade will still only have one initial target so the bouncing effect only triggers once.


Similarly, the extra damage on the attack will only apply to the first target

The green-flame blade spell goes on to state (emphasis mine):

(…) At 5th level, the melee attack deals an extra 1d8 fire damage to the target (…)

Because there is only that same one target, and Sweeping Attack does not add additional targets for the spell, the second creature would not even take the fire damage that the first one did.


The Sweeping Attack Maneuver is weird

This feature states (emphasis mine):

When you hit a creature with a melee weapon attack, you can expend one superiority die to attempt to damage another creature with the same attack. Choose another creature within 5 feet of the original target and within your reach. If the original attack roll would hit the second creature, it takes damage equal to the number you roll on your superiority die. The damage is of the same type dealt by the original attack.

Reading this strictly we can conclude the following:

  1. The Maneuver attempts to damage another creature using the same attack; it neither attempts to hit another creature nor does it involve a second attack of any sort.

  2. You only damage the second creature if the original attack roll would hit them, not if it actually does hit them.

This feature is, at least to me, rather strange in its wording. It makes for some weird instances where (dis)advantage has no effect on the second target, or where critical hits only affect the first target, but that’s how it is worded.

At my own tables I would quite likely rule the second target as actually having been hit by the attack, though I’d have to think more about the implications of that ruling before I made it.

color – Why does the NEOWISE comet appear green on my photos?

Last night, I took some pictures of the NEOWISE comet. To my surprise, the comet has a greenish colour on all of them, e.g. this one which I’ve enlarged a bit:


[Photo of comet NEOWISE, enlarged]

When viewing it with the naked eye, it appeared mostly white; I’ve also seen bluish and orangish NEOWISE photos on the Internet, but never green. The green color also appears in the raw images. I’m wondering what the cause of this might be.

My setup:

  • Canon EOS 1100D, manual mode
  • I used two lenses (but the greenish colour appears on both):
    • EF-S 18–55mm, I think I have the IS II version
    • ET-60 75-300mm, the photo above was shot with this one at 75mm
  • f/4
  • 30 seconds exposure
  • ISO 100
  • Automatic white balance
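
For reference, one way to check whether green really dominates before any in-camera processing is to average the raw values under each Bayer filter. A minimal sketch, assuming the rawpy library is installed and the raw file is named neowise.CR2 (the file name is hypothetical):

    # Average the raw values under each Bayer filter, before white balance.
    # Assumes: pip install rawpy numpy; the file name is hypothetical.
    import numpy as np
    import rawpy

    with rawpy.imread("neowise.CR2") as raw:
        data = raw.raw_image_visible.astype(np.float64)
        colors = raw.raw_colors_visible   # index of the CFA color per photosite
        labels = raw.color_desc.decode()  # e.g. "RGBG"
        for idx, name in enumerate(labels):
            print(f"{name}: mean raw value = {data[colors == idx].mean():.1f}")

Since half of the photosites sit under green filters, some green dominance in a demosaiced image is expected even for a neutral subject; comparing per-photosite means as above separates that from a genuinely green light source.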

color management – Why does the green gain affect the blue and red values on a CMOS sensor with individual gain control?

I’m taking pictures of small surfaces (5 × 5 millimeters) in an experimental setup with an IDS camera that contains a CMOS sensor. Afterwards, the pictures are split into the 3 color channels (red, green, blue) for further processing.

Before we started evaluating the pictures, my boss asked me to find out how different settings of the camera affect the pictures that are taken. The camera parameters allow for individually setting the gain for the overall brightness, as well as for every individual color (red, green, blue). All of these gains are said to take place inside the camera (analog gain, according to the manual), not in the software.

We then observed something we don’t understand; maybe somebody around here can help. Whenever we changed the setting of an individual color channel (for example, the gain for the green signal), the value of this signal changed proportionally. So far, so good. The problem: the other 2 channels (which we expected not to be affected at all) reduced their values significantly as we increased the gain for the green channel.

Can somebody tell us why that is the case, whether it is usual behaviour, and how we could stop the red and blue signals from being affected by the green gain?

Additional information: White balance is turned off (this setting enables me to adjust the individual color gains in the first place).
The camera model is “UI-3280CP”, version “C-HQ” (Color – High Quality) by “IDS”; the sensor is a global-shutter CMOS sensor called “IMX264”.

Additional information: The decrease in the “blue” and “red” channels stopped as soon as the green channel reached saturation (the value 255).
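
One way to pin down where this happens would be to sweep only the green gain, save a frame at each setting, and plot the mean of each channel. A minimal sketch, assuming the frames were exported as 8-bit RGB PNGs named gain_<value>.png (the naming scheme and export format are hypothetical):

    # Plot per-channel means against the green gain setting.
    # Assumes frames were exported as 8-bit RGB PNGs; names are hypothetical.
    import glob
    import re

    import imageio.v3 as iio
    import matplotlib.pyplot as plt

    paths = glob.glob("gain_*.png")
    paths.sort(key=lambda p: int(re.search(r"gain_(\d+)", p).group(1)))

    gains, means = [], {"R": [], "G": [], "B": []}
    for path in paths:
        gains.append(int(re.search(r"gain_(\d+)", path).group(1)))
        img = iio.imread(path)  # shape (H, W, 3)
        for idx, ch in enumerate("RGB"):
            means[ch].append(img[..., idx].mean())

    for ch, line_color in zip("RGB", ("red", "green", "blue")):
        plt.plot(gains, means[ch], color=line_color, marker="o", label=ch)
    plt.xlabel("green gain setting")
    plt.ylabel("mean channel value (0-255)")
    plt.legend()
    plt.show()

If red and blue also drop in the raw (pre-demosaic) data, the coupling happens before readout; if they only drop in the processed RGB output, a normalization or color-correction step in the software pipeline is the more likely culprit.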

Why are Red, Green, and Blue the primary colors of light?

TL;DR

Do primary colors really exist in the real world?

No.

There are no primary colors of light; in fact, there is no color intrinsic to light at all (or to any other wavelength of electromagnetic radiation). There are only colors in the perception of certain wavelengths of EMR by our eye/brain systems.

Or did we select red, green, and blue because those are the colors that human eyes’ cones respond to?

We use three-color reproduction systems because the human vision system is trichromatic, but the primary colors we use in those systems do not match the three colors to which the three types of cones in the human retina are most responsive.


Short Answer

There’s no such thing as “color” in nature. Light only has wavelengths. Electromagnetic radiation beyond either end of the visible spectrum also has wavelengths. The only difference between visible light and other forms of electromagnetic radiation, such as radio waves, is that our eyes chemically react to certain wavelengths of electromagnetic radiation and do not react to other wavelengths. Beyond that there is nothing substantially different between “light” and “radio waves” or “X-rays”. Nothing.

Our retinas are made up of three different types of cones, each most responsive to a different wavelength of electromagnetic radiation. In the case of our “red” and “green” cones, there is very little difference in their response to most wavelengths of light. But by comparing which of the two has the higher response, and by how much, our brains can interpolate how far towards red or towards blue the light source is strongest.

Color is a construct of our eye/brain system that compares the relative response of the three different types of cones in our retinas and creates a perception of “color” based on the different amounts each set of cones responds to the same light. There are many colors humans perceive that can not be created by a single wavelength of light. “Magenta”, for instance, is what our brains create when we are simultaneously exposed to red light on one end of the visible spectrum and blue light on the other end of the visible spectrum.

Color reproduction systems have colors that are chosen to serve as primary colors, but the specific colors vary from one system to the next, and such colors do not necessarily correspond to the peak sensitivities of the three types of cones in the human retina. “Blue” and “Green” are fairly close to the peak response of human S-cones and M-cones, but “Red” is nowhere near the peak response of our L-cones.


Extended Answer

The spectral response of color filters on Bayer masked sensors closely mimics the response of the three different types of cones in the human retina. In fact, our eyes have more “overlap” between red and green than most digital cameras do.

The ‘response curves’ of the three different types of cones in our eyes:
[Image: the response curves of the three types of cones in the human eye]
Note: The “red” L-line peaks at about 565nm, which is what we call ‘yellow-green’, rather than at 640-650nm, which is the color we call “Red.”

A typical response curve of a modern digital camera:
[Image: a typical spectral response curve of a modern digital camera]
Note: The “red” filtered part of the sensor peaks at 600nm, which is what we call “orange”, rather than 640nm, which is the color we call “Red.”

The IR and UV wavelengths are filtered by elements in the stack in front of the sensor in most digital cameras. Almost all of that light has already been removed before the light reaches the Bayer mask. Generally, those other filters in the stack in front of the sensor are not present and IR and UV light are not removed when sensors are tested for spectral response. Unless those filters are removed from a camera when it is used to take photographs, the response of the pixels under each color filter to, say, 870nm is irrelevant because virtually no 800nm or longer wavelength signal is being allowed to reach the Bayer mask.

  • Without the ‘overlap’ between red, green and blue (or more precisely, without the overlapping way the sensitivity curves of the three different types of cones in our retinas are shaped to light with peak sensitivity centered on approximately 565nm, 535nm, and 420nm) it would not be possible to reproduce colors in the way that we perceive many of them.
  • Our eye/brain vision system creates colors out of combinations and mixtures of different wavelengths of light as well as out of single wavelengths of light.
  • There is no color that is intrinsic to a particular wavelength of visible light. There is only the color that our eye/brain assigns to a particular wavelength or combination of wavelengths of light.
  • Many of the distinct colors we perceive can not be created by a singular wavelength of light.
  • On the other hand, the response of human vision to any particular single wavelength of light that results in the perception of a certain color can also be reproduced by combining the proper ratio of other wavelengths of light to produce the same biological response in our retinas (see the sketch after this list).
  • The reason we use RGB to reproduce color is not because the colors ‘Red’, ‘Green’, and ‘Blue’ are somehow intrinsic to the nature of light. They aren’t. We use RGB because trichromatism¹ is intrinsic to the way our eye/brain systems respond to light.
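
To make that point concrete, here is a toy sketch using rough Gaussian stand-ins for the three cone sensitivity curves (the peak wavelengths 420/535/565nm come from this answer; the curve widths and the test spectra are made-up illustration values, not measured data). Two physically different spectra end up producing nearly the same cone responses:

    # Toy metamerism demo: two different spectra, nearly the same cone response.
    # Cone peaks from the text (420/535/565 nm); widths are illustrative guesses.
    import numpy as np

    wl = np.arange(380, 701)  # wavelengths in nm

    def gaussian(center, width):
        return np.exp(-0.5 * ((wl - center) / width) ** 2)

    # Crude stand-ins for the S, M, and L cone sensitivity curves.
    cones = {"S": gaussian(420, 30), "M": gaussian(535, 45), "L": gaussian(565, 50)}

    def cone_response(spectrum):
        return {name: round(float(np.sum(sens * spectrum)), 2)
                for name, sens in cones.items()}

    # Spectrum A: a single narrow "yellow" line near 575 nm.
    spec_a = gaussian(575, 3)

    # Spectrum B: a "green" line plus a "red" line, with coefficients chosen
    # (by solving the 2x2 linear system for M and L) so the responses match A.
    spec_b = 0.54 * gaussian(545, 3) + 0.88 * gaussian(620, 3)

    print("A:", cone_response(spec_a))  # M and L match B to within ~1%
    print("B:", cone_response(spec_b))

Under this toy model, both spectra drive the M and L cones almost identically (and the S cones barely at all), so an eye built from these curves could not tell them apart; that is exactly what lets a display mix a few primaries to stand in for arbitrary spectra.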

The Myth of our “Red” cones and the Myth of “Red” filters on our Bayer masks.

Where a lot of folks’ understanding of ‘RGB’ as being intrinsic to the human vision system runs off the rails is in the idea that L-cones are most sensitive to red light somewhere around 640nm. They are not. (Neither are the filters in front of the “red” pixels on most of our Bayer masks. We’ll come back to that below.)

Our S-cones (‘S’ denotes most sensitive to ‘short wavelengths’, not ‘smaller in size’) are most sensitive to about 420nm, which is the wavelength of light most of us perceive as between blue and violet.

Our M-cones (‘medium wavelength’) are most sensitive to about 535nm, which is the wavelength of light most of us perceive as a slightly blue-tinted green.

Our L-cones (‘long wavelength’) are most sensitive to about 565nm, which is the wavelength of light most of us perceive as yellow-green with a bit more green than yellow. Our L-cones are nowhere near as sensitive to 640nm “Red” light as they are to 565nm “Yellow-Green” light!

As the simplified first graph above illustrates, there’s not that much difference between our M-cones and L-cones. But our brains use that difference to perceive “color.”

From comments by another user to a different answer:

Imagine an extraterrestrial alien who has yellow as a primary color.
She would find our color prints and screens lacking. She would think
we would be partially color blind not seeing the difference between
the world she perceives and our color prints and screens.

That’s actually a more accurate description of the sensitivities of our cones that are most sensitive to around 565nm than describing the peak sensitivity of L-cones as “red” when 565nm is on the ‘green’ side of ‘yellow’. The color we call “Red” is centered on about 640nm, which is on the other side of “orange” from “yellow.”

Why we use three colors in our color reproduction systems

To recap what we’ve covered up to this point:

There are no primary colors of light.

It is the trichromatic nature of human vision that allows tri-color reproduction systems to more or less accurately mimic the way we see the world with our own eyes. We perceive a large number of colors.

What we call “primary” colors are not the three colors we perceive for the three wavelengths of light to which each type of cone is most sensitive.

Color reproduction systems have colors that are chosen to serve as primary colors, but the specific colors vary from one system to the next, and such colors do not directly correspond to the peak sensitivities of the three types of cones in the human retina.

The three colors, whatever they might be, used by reproduction systems do not match the three wavelengths of light to which each type of cone in the human retina is most sensitive.

If, for example, we wanted to create a camera system that would provide ‘color accurate’ images for dogs, we would need to create a sensor masked to mimic the response of the cones in dogs’ retinas, rather than one that mimics the cones in human retinas. Because dogs have only two types of cones in their retinas, they see the “visible spectrum” differently than we do and can differentiate much less between similar wavelengths of light than we can. Our color reproduction system for dogs would only need to be based on two, rather than three, different filters on our sensor masks.

[Image: chart of dog spectral sensitivity compared to human]

The chart above explains why we think our dog is dumb for running right past that brand new shiny bright red toy we just threw out in the yard: he can barely see the wavelengths of light that we call “red.” It looks to a dog like a very dim brown looks to humans. That, combined with the fact dogs don’t have the ability to focus at close distances the way humans do – they use their powerful sense of smell for that – leaves him at a distinct disadvantage since he’s never smelled the new toy you just pulled out of the packaging it came in.

Back to humans.

The Myth of “only” red, “only” green, and “only” blue

If we could create a sensor so that the “blue” filtered pixels were sensitive to only 420nm light, the “green” filtered pixels were sensitive to only 535nm light, and the “red” filtered pixels were sensitive to only 565nm light, it would not produce an image that our eyes would recognize as anything resembling the world as we perceive it. To begin with, almost all of the energy of “white light” would be blocked from ever reaching the sensor, so it would be far less sensitive to light than our current cameras are. Any source of light that didn’t emit or reflect light at one of the exact wavelengths listed above would not be measurable at all, so the vast majority of a scene would be very dark or black. It would also be impossible to differentiate objects that reflect a LOT of light at, say, 490nm and none at 615nm from objects that reflect a LOT of 615nm light but none at 490nm, if they both reflected the same amounts of light at 535nm and 565nm. It would be impossible to tell apart many of the distinct colors we perceive.

Even if we created a sensor so that the “blue” filtered pixels were only sensitive to light below about 480nm, the “green” filtered pixels were only sensitive to light between 480nm and 550nm, and the “red” filtered pixels were only sensitive to light above 550nm, we would not be able to capture and reproduce an image that resembles what we see with our eyes. Although it would be more efficient than the sensor described above (sensitive to only 420nm, only 535nm, and only 565nm light), it would still be much less sensitive than the overlapping sensitivities provided by a Bayer masked sensor. The overlapping nature of the sensitivities of the cones in the human retina is what gives the brain the ability to perceive color from the differences in the responses of each type of cone to the same light. Without such overlapping sensitivities in a camera’s sensor, we wouldn’t be able to mimic the brain’s response to the signals from our retinas. We would not be able to, for instance, discriminate at all between something reflecting 490nm light and something reflecting 540nm light. In much the same way that a monochromatic camera can not distinguish between any wavelengths of light, but only between intensities of light, we would not be able to discriminate the colors of anything that is emitting or reflecting only wavelengths that all fall within only one of the three color channels.
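
The loss of discrimination with hard-edged, non-overlapping filters can be shown with the same kind of toy model (the 480nm and 550nm cutoffs come from the paragraph above; everything else is an illustrative guess):

    # Why overlap matters: hard-edged box filters vs. overlapping Gaussians.
    # Cutoffs (480/550 nm) from the text; all other numbers are illustrative.
    import numpy as np

    wl = np.arange(380, 701)

    def line(center):  # a narrow spectral line
        return np.exp(-0.5 * ((wl - center) / 3.0) ** 2)

    box = {  # non-overlapping channels with hard cutoffs
        "B": (wl < 480).astype(float),
        "G": ((wl >= 480) & (wl < 550)).astype(float),
        "R": (wl >= 550).astype(float),
    }
    soft = {  # overlapping channels, loosely cone-like
        "B": np.exp(-0.5 * ((wl - 460) / 40) ** 2),
        "G": np.exp(-0.5 * ((wl - 535) / 45) ** 2),
        "R": np.exp(-0.5 * ((wl - 600) / 50) ** 2),
    }

    for name, filters in (("box", box), ("soft", soft)):
        for center in (500, 530):  # two different "green" wavelengths
            resp = {ch: round(float(np.sum(f * line(center))), 2)
                    for ch, f in filters.items()}
            print(f"{name} filters, {center} nm line: {resp}")

The box filters report essentially identical (0, 7.5, 0) triplets for both lines, so 500nm and 530nm light would be indistinguishable; the overlapping filters give a different ratio for each, which is exactly the information color interpolation relies on.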

Think of how it is when we are seeing under very limited spectrum red lighting. It is impossible to tell the difference between a red shirt and a white one. They both appear the same color to our eyes. Similarly, under limited spectrum red light anything that is blue in color will look very much like it is black because it isn’t reflecting any of the red light shining on it and there is no blue light shining on it to be reflected.

The whole idea that red, green, and blue would be measured discretely by a “perfect” color sensor is based on oft-repeated misconceptions about how Bayer masked cameras reproduce color (the green filter only allows green light to pass, the red filter only allows red light to pass, etc.). It is also based on a misconception of what ‘color’ is.

How Bayer Masked Cameras Reproduce Color

Raw files don’t really store any colors per pixel. They only store a single brightness value per pixel.

It is true that with a Bayer mask over each pixel the light is filtered with either a “Red”, “Green”, or “Blue” filter over each pixel well. But there’s no hard cutoff where only green light gets through to a green filtered pixel or only red light gets through to a red filtered pixel. There’s a lot of overlap.² A lot of red light and some blue light gets through the green filter. A lot of green light and even a bit of blue light makes it through the red filter, and some red and green light is recorded by the pixels that are filtered with blue. Since a raw file is a set of single luminance values for each pixel on the sensor there is no actual color information to a raw file. Color is derived by comparing adjoining pixels that are filtered for one of three colors with a Bayer mask.

Each photon vibrating at the corresponding frequency for a ‘red’ wavelength that makes it past the green filter is counted just the same as each photon vibrating at a frequency for a ‘green’ wavelength that makes it into the same pixel well.³

It is just like putting a red filter in front of the lens when shooting black and white film. That doesn’t result in a monochromatic red photo, nor in a B&W photo where only red objects have any brightness at all.
Rather, when photographed in B&W through a red filter, red objects appear a brighter shade of grey than green or blue objects that are the same brightness in the scene.

The Bayer mask in front of monochromatic pixels doesn’t create color either. What it does is change the tonal value (how bright or how dark the luminance value of a particular wavelength of light is recorded) of various wavelengths by differing amounts. When the tonal values (gray intensities) of adjoining pixels filtered with the three different colors used in the Bayer mask are compared, colors may be interpolated from that information. This is the process we refer to as demosaicing.
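
Here is a minimal sketch of that interpolation step: a naive bilinear demosaic of an RGGB mosaic (real raw converters use far more sophisticated algorithms; this only illustrates the “compare neighboring filtered pixels” idea, and it assumes numpy and scipy are available):

    # Naive bilinear demosaic of an RGGB Bayer mosaic (illustration only).
    import numpy as np
    from scipy.signal import convolve2d

    def demosaic_bilinear(mosaic):
        """mosaic: 2-D array of raw luminance values in an RGGB layout."""
        h, w = mosaic.shape
        masks = np.zeros((h, w, 3), dtype=bool)
        masks[0::2, 0::2, 0] = True   # red filter sites
        masks[0::2, 1::2, 1] = True   # green sites on red rows
        masks[1::2, 0::2, 1] = True   # green sites on blue rows
        masks[1::2, 1::2, 2] = True   # blue filter sites
        kernel = np.array([[0.25, 0.5, 0.25],
                           [0.50, 1.0, 0.50],
                           [0.25, 0.5, 0.25]])
        rgb = np.empty((h, w, 3))
        for c in range(3):
            known = np.where(masks[..., c], mosaic, 0.0)
            # Distance-weighted average of known neighbors fills missing sites.
            num = convolve2d(known, kernel, mode="same", boundary="symm")
            den = convolve2d(masks[..., c].astype(float), kernel,
                             mode="same", boundary="symm")
            rgb[..., c] = np.where(masks[..., c], mosaic, num / den)
        return rgb

Every output pixel gets its two missing color values from its differently-filtered neighbors; the “color” of a pixel exists only after this comparison, never in the raw value itself. A real pipeline would follow this with white balance and a color matrix.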

What Is ‘Color’?

Equating certain wavelengths of light with the “color” humans perceive for that specific wavelength is a bit of a false assumption. “Color” is very much a construct of the eye/brain system that perceives it and doesn’t really exist at all in the portion of the range of electromagnetic radiation that we call “visible light.” While it is the case that light of only a single discrete wavelength may be perceived by us as a certain color, it is equally true that some of the colors we perceive can not be produced by light that contains only a single wavelength.

The only difference between “visible” light and other forms of EMR that our eyes don’t see is that our eyes are chemically responsive to certain wavelengths of EMR while not being chemically responsive to other wavelengths. Bayer masked cameras work because their sensors mimic the trichromatic way our retinas respond to visible wavelengths of light and when they process the raw data from the sensor into a viewable image they also mimic the way our brains process the information gained from our retinas. But our color reproduction systems rarely, if ever, use three primary colors that match the three respective wavelengths of light to which the three types of cones in the human retina are most responsive.

¹ There are a very few rare humans, almost all of them female, who are tetrachromats with an additional type of cone that is most sensitive to light at wavelengths between “green” (535nm) and “red” (565nm). Most such individuals are functional trichromats. Only one such person has been positively identified as a functional tetrachromat. The subject could identify more colors (in terms of finer distinctions between very similar colors – the range at both ends of the ‘visible spectrum’ was not extended) than other humans with normal trichromatic vision.

² Keep in mind that the “red” filters are usually actually a yellow-orange color that is closer to “red” than the greenish-blue “green” filters, but they are not actually “Red.” That’s why a camera sensor looks blue-green when we examine it. Half the Bayer mask is a slightly blue-tinted green, one quarter is a violet-tinted blue, and one-quarter is a yellow-orange color. There is no filter on a Bayer mask that is actually the color we call “Red”, all of the drawings on the internet that use “Red” to depict them notwithstanding.

³ There are very minor differences in the amount of energy a photon carries based on the wavelength at which it is vibrating, but each sensel (pixel well) doesn’t discriminate between photons that have slightly more or slightly less energy. Each absorbed photon of visible light frees roughly one electron in the silicon regardless of its exact energy, so the sensel simply accumulates the charge from all of the photons that strike it.
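
For reference, the energy of a photon follows E = hc/λ; a quick worked comparison across the visible range (standard physical constants, not from the original answer):

    E = \frac{hc}{\lambda} \approx \frac{1240\ \mathrm{eV \cdot nm}}{\lambda},
    \qquad E_{450\,\mathrm{nm}} \approx 2.76\ \mathrm{eV},
    \qquad E_{650\,\mathrm{nm}} \approx 1.91\ \mathrm{eV}

Both values sit well above silicon’s band gap of roughly 1.12 eV, which is why each absorbed visible photon frees about one electron regardless of its exact wavelength; the surplus energy becomes heat rather than extra signal.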

usa – When I enter the US, should I show my old or new passport to the US immigration? Or does the US immigration only look at my US green card?

I am a French citizen domiciled in California with US permanent residency (a US green card). I exited the US using my French passport, and got a new French passport while outside the US (because the old passport’s condition didn’t satisfy some airline’s employee, not because it had expired or was about to).

When I enter the US (by plane), should I show my old or new passport to the US immigration? Or does the US immigration only look at my US green card? Note that my stay outside the US may be between 6 months and 1 year, in case this changes what the US immigration will look at. I’m not sure yet where I’ll be flying from but likely somewhere in Asia.

character creation – Delta Green and handedness

I am starting a post-apocalypse Delta Green game where the group is tasked with trying to repair society. I had one person ask if they can use two weapons, one in each hand. I cannot find anything about this; can someone give me some help with this question? Our next session is the second weekend in July.

Where’s a road of McMansions in New Zealand that opens up to green flat field and mountains?

When my family and I travelled to New Zealand, we were just driving around in some suburb. We turned onto a road with at least 40 townhouses or houses packed close together on postage-stamp-sized lots. But we were surprised, because this road ABRUPTLY separated the houses from the wilderness: the other side of the road was just green grass flatland for kilometers, until the mountains. Anyone know where this is?

These pictures don’t show the abruptness, but I can’t find anything relevant.

[Three photos: top, middle, bottom]

Why are camera sensors green?

The color you see when you look at a “sensor” is usually determined by the combined colors of the color filter array placed directly in front of the actual silicon chip, as well as by the other filters (low-pass, IR, UV) placed in the “stack” in front of the sensor.

Although we call them “red”, “green”, and “blue”, the colors of most Bayer masks are:

  • 50% “green” pixels that are centered on around 530-540 nanometers and significantly sensitive to light ranging from about 460nm to past 800nm and the edge of the infrared range. The “color” of 540nm light is perceived by most humans as a slightly bluish green color.
  • 25% “blue” pixels that are centered on around 460nm and significantly sensitive to light ranging from the non-visible ultraviolet range to about 560 nm. The “color” of 460nm light is perceived by most humans as a bluish-violet color.
  • 25% “red” pixels that are centered on around 590-600nm and significantly sensitive to light ranging from about 560nm to well into the infrared range. The “color” of 600nm light is perceived by most humans as a yellowish-orange color. (What we call “red” is on the other side of orange at about 640nm).

The “color” components of the Bayer mask can be seen by looking at spectral response curves for various sensors:

[Images: spectral response curves for two camera sensors]

The “colors” to which each type of cone in the human retina is most sensitive are similar:

[Image: response curves of the three types of cones in the human retina]

Here is a representation for the “colors” humans perceive for various wavelengths of light:

[Image: the visible spectrum, showing the colors perceived at each wavelength]

Please compare the peaks of the sensitivities above with the “colors” of those wavelengths along the visible spectrum.

There is no coating on most tri-color imaging sensors that is centered on what we call “red”, all of the drawings on the internet of CMOS sensors with Bayer filter arrays notwithstanding.

[Image: typical diagram of a Bayer filter array on a CMOS sensor]

Most CMOS sensors placed in cameras used for taking the types of images we consider “photography” here have a “stack” of filters that include both infrared (IR) and ultraviolet (UV) cut filters in front of the Bayer color filter array. Most also include a low pass “anti-aliasing” filter. Even sensor designs that are said to have “no low pass filter” tend to have either a cover glass with the same refractive index or the two components of a low pass filter oriented to each other so that the second one cancels the first one.

[Images: diagrams of the filter “stack” in front of a camera sensor]

What one sees when one looks into the front of a camera at an exposed CMOS sensor is the combined effect of light reflecting off all of these filters, dominated by the slightly bluish-green tint of the “green” filtered portions of the Bayer mask combined with half as many blue-violet and orange-yellow filtered portions that we call “blue” and “red”. When the sensor is viewed sitting inside an actual camera, most of the light striking it and the stack in front of it will come from a fairly narrow range of angles and will usually be fairly uniform in color. (The purple tint on the edge of the Sony sensor is probably due to reflections of light at just the right angles off the UV and/or IR cut filters.)
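
As a rough illustration of why the combined reflection reads as green, here is a sketch that blends the three filter tints in their 2:1:1 Bayer proportions (the RGB values assigned to each filter are loose guesses at “slightly bluish green”, “blue-violet”, and “yellow-orange”, not measurements):

    # Blend the three Bayer filter tints in their 2:1:1 proportions.
    # The filter RGB values are illustrative guesses, not measurements.
    import numpy as np

    filters = {
        "green": np.array([70, 180, 150]),   # slightly bluish green
        "blue":  np.array([70, 60, 200]),    # violet-tinted blue
        "red":   np.array([230, 140, 40]),   # yellow-orange
    }
    weights = {"green": 0.5, "blue": 0.25, "red": 0.25}

    blend = sum(weights[name] * rgb for name, rgb in filters.items())
    print("approximate blended tint (R, G, B):", blend.round().astype(int))
    # -> roughly (110, 140, 135): a muted blue-green, matching the teal
    #    cast of a bare sensor.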

[Images: two exposed camera sensors viewed from the front]

When there is light from a wide range of angles falling on such a sensor without the filter “stack” in front of it, there will also be a prismatic effect evident that will show a fuller range of colors, due to the shapes of the surface of the microlenses on top and the colors of the Bayer mask sandwiched in between the microlenses and the sensor.

[Image: a sensor without the filter stack, showing a prismatic range of colors]