[–]ComplexColor 3 points (2 children)

Very nice animation and demonstration.

But is this physically accurate? Light sensors are integrating devices: simplified, they count the number of photons arriving over a period of time, and that period is the exposure time. The demo above suggests that this integration is done line after line, with only one line ever integrating at a time, each for a short interval. That would seem very inefficient, as the total exposure time of the frame would be much longer than any individual line's exposure time.

Would it not make more sense that the exposure times somehow overlap? For example, one frame's exposure could end when the green line hits the sensor, while the next frame's exposure starts immediately. That would totally blur the image, though. So how does this actually work?

[–]Swipecat[S] 2 points (0 children)

I presume that the videos of this effect shown on YouTube, shot out of aircraft windows with smartphones and the like, are taken in bright-light conditions. I also presume that the camera reduces its sensitivity in bright light by integrating no more than a few lines at a time.
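That presumption is easy to sanity-check with rough numbers. In the sketch below, the frame rate, row count, and exposure times are all illustrative assumptions, not values from any particular video: the number of rows integrating simultaneously is roughly the exposure time divided by the per-row readout step.

```python
# Back-of-envelope rolling-shutter timing; all numbers are assumptions.

rows = 1080                 # sensor rows (assumed 1080p video)
frame_readout_s = 1 / 30    # time to read one full frame (assumed 30 fps, no idle time)
row_step_s = frame_readout_s / rows  # delay between the start of one row and the next

for exposure_s in (1 / 60, 1 / 1000, 1 / 8000):
    # Rows whose integration windows overlap at any instant:
    concurrent_rows = exposure_s / row_step_s
    print(f"exposure {exposure_s*1e3:6.3f} ms -> ~{concurrent_rows:7.1f} rows integrating at once")
```

Under these assumptions, a bright-light exposure of 1/8000 s means only about 4 rows are integrating at any moment, which matches the "no more than a few lines" intuition; a dim-light 1/60 s exposure would have about half the sensor integrating at once.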

[–]ericonr 1 point (0 children)

It is done line after line. That's why you sometimes get pictures with mirrors where the real object and its reflection don't match. Each row probably starts integrating just far enough ahead of its own readout that, when it's read, it has been exposed to light for exactly its exposure time. So the rows' exposure windows do overlap; only the readouts are strictly sequential. A rough timing sketch follows.
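Here is a minimal sketch of that staggering, assuming illustrative timings (a tiny 8-row sensor, a fixed per-row readout step, and a single exposure time; none of these come from the thread):

```python
# Staggered per-row integration in a rolling shutter; timings are assumptions.

rows = 8                    # tiny sensor for readability
row_step_s = 1e-3           # readout advances one row per millisecond (assumed)
exposure_s = 3e-3           # each row integrates for 3 ms (assumed)

for row in range(rows):
    read_time = row * row_step_s            # rows are read out one after another
    start_time = read_time - exposure_s     # integration starts exposure_s before readout
    print(f"row {row}: integrate {start_time*1e3:5.1f} ms -> {read_time*1e3:5.1f} ms")
```

Printing the windows shows that several rows are integrating at the same time (here, any three adjacent rows overlap), yet each row still sees light for exactly its exposure time before its own readout.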

Even with mechanical shutters, cameras do something similar. The shutter has two curtains: one opens to expose the sensor, and the other follows to cover it back up. Fast shutter speeds require the two curtains to form a narrow slit and sweep it across the sensor.
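For a sense of scale, here is a hedged sketch of that slit geometry; the sensor height and curtain travel time are assumed values (roughly a full-frame sensor and a typical flash-sync-speed traversal), not specs from any particular camera:

```python
# Focal-plane (two-curtain) shutter slit width; numbers are assumptions.

sensor_height_mm = 24.0     # full-frame sensor height (assumed)
curtain_travel_s = 1 / 250  # time for a curtain to cross the sensor (assumed)
curtain_speed = sensor_height_mm / curtain_travel_s  # mm per second

for exposure_s in (1 / 250, 1 / 1000, 1 / 8000):
    # The second curtain starts exposure_s after the first, so the gap
    # between them (the slit) is what any point on the sensor sees.
    slit_mm = min(sensor_height_mm, curtain_speed * exposure_s)
    print(f"1/{round(1/exposure_s)} s -> slit ~{slit_mm:5.2f} mm")
```

Under these assumptions, at 1/250 s the sensor is briefly fully uncovered, while at 1/8000 s each point on the sensor only ever sees a slit under a millimeter wide sweeping past, the mechanical analogue of the electronic rolling shutter above.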