I am in the midst of creating a drivelapse using photos taken with my DSLR. I understand that DSLR images have square pixels.
During video editing I remembered watching a Lynda timelapse tutorial where the presenter, within his video editing program (PreEl, i.e. Premiere Elements), interpreted his photos as "DV/D1 NTSC (0.9091)". That got me thinking, "What's the difference in the final movie file between square pixels and interpreted pixels?", so I performed an experiment. I rendered one timeline with square-pixel photos and another timeline with interpreted photos, then placed both output files on top of one another. Both timelines contained the same number of images and were exported as 24 fps files. Here is a snapshot of the result:
I think both of them look pretty good, but it is interesting how:
- Square pixels seem to pull the field of view toward the viewer. Things look closer, but some visual information on the left and right is lost.
- Interpreted pixels do the opposite; they push the field of view away from the viewer. Things look farther away, but there is more information on the left and right sides of the screen (the sketch after this list puts rough numbers on the effect).
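In case it helps anyone answer, here is a minimal sketch of the arithmetic I think is involved, assuming the editor simply treats display width as stored width times the pixel aspect ratio. The 720x480 frame size and the 0.9091 figure are the standard DV/D1 NTSC values, used purely for illustration; my actual photos are much larger:

```python
# Minimal sketch of pixel aspect ratio (PAR) arithmetic, assuming the editor
# computes display width as stored width x PAR. The 720x480 raster is the
# standard DV/D1 NTSC frame, used here just for illustration.

STORED_WIDTH = 720    # stored pixels per line in DV/D1 NTSC
STORED_HEIGHT = 480   # stored lines per frame

def display_width(stored_width: int, par: float) -> float:
    """Frame width in square-pixel units after the PAR is applied."""
    return stored_width * par

square = display_width(STORED_WIDTH, 1.0)      # footage left as square pixels
dv_ntsc = display_width(STORED_WIDTH, 0.9091)  # footage interpreted as DV/D1 NTSC

print(f"square pixels (PAR 1.0):  {square:.1f} px wide, aspect {square / STORED_HEIGHT:.3f}")
print(f"interpreted (PAR 0.9091): {dv_ntsc:.1f} px wide, aspect {dv_ntsc / STORED_HEIGHT:.3f}")
# square pixels (PAR 1.0):  720.0 px wide, aspect 1.500
# interpreted (PAR 0.9091): 654.6 px wide, aspect 1.364
```

If that assumption is right, interpreting the photos makes the editor treat the same stored pixels as roughly 9% narrower, which would explain why the two renders frame the scene differently.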
Before I compared them directly, I wasn't aware of any difference. When I watched the square-pixel video I thought, "That looks good." Then when I watched the interpreted video I thought, "That looks good too." Only when I put them on top of one another could I tell the difference.
Would someone kindly explain the square vs interpreted pixel thing to me?