I already asked these questions on Electrical Engineering SE but didn't get an answer, so I'm trying here.
I want to write a program that exploits the effects of line-by-line scanning in an electronic-shutter CMOS camera sensor, like the ones in modern smartphones. What I do not understand is:
Does each pixel line only accumulate exposure until the moment it is digitized, so that it captures only what is happening visually during the (roughly) microsecond in which that single line is scanned?
Can the scan/readout time (and the exposure itself) of each pixel line be changed, so that the scan time differs from one end of the picture to the other?
Are long-exposure photos (or any exposure in general) composed of several faster full-sensor scans superimposed on each other? I ask because exposing a moving scene does not produce an image distorted section by section, but rather smooth motion streaks, which suggests the exposure cannot consist of only a single slow scan across the pixel array.
And my last question: what is the typical scan/readout time for a pixel line? An article I read said it could be up to 8 microseconds. At that rate, an average smartphone sensor about 2,500 lines tall would take 1/50 of a second to read from the first line to the last (and so show up to 1/50 s of rolling-shutter distortion). But if I take a picture outside with my Xiaomi Mi 6 while dimming the exposure, the phone reports an exposure time of ~1/90,000 s. What does that mean? Were the explanations I read wrong, or just poorly worded?
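For reference, here is my back-of-envelope math as a tiny Python sketch. The 8 µs per-line figure and the 2,500-line sensor height are just the assumed numbers from the article, not measured values:

```python
# Rough rolling-shutter readout skew: time from reading the first
# sensor line to reading the last one, assuming a fixed per-line rate.
line_readout_us = 8        # assumed per-line readout time in microseconds
rows = 2500                # assumed sensor height in lines

total_skew_s = rows * line_readout_us / 1_000_000
print(total_skew_s)        # 0.02, i.e. 1/50 s of first-to-last-line skew
```

This total readout skew is what I would expect to bound the rolling-shutter distortion, which is why the reported ~1/90,000 s exposure confuses me.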
Thank you in advance.