halfbakery
You could have thought of that.
Modern cameras have the ability to do exposure bracketing. Press the shutter button, and it will take several pictures (usually 3) with different exposures, from which you can later choose the best one.
With multidimensional bracketing, all operations of the camera will be bracketed simultaneously:
- shutter speed: to give different amounts of motion blur.
- aperture: to change the depth of field.
- focus distance: to change the selective focus, and to take into account the possibility that your focus is off.
- zoom: close, medium, and far, to eliminate the need to crop later.
So basically, when you press the shutter button, an exposure will be made for every combination of the above options. Assuming there are three possibilities for each, you'll have 3^4 = 81 exposures. It might take a few seconds for your shutter to open 81 times, but in the end you'll most likely have at least one picture that's optimal.
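The combinatorial explosion is easy to sketch: the camera would simply take the Cartesian product of its bracketed settings. A minimal Python sketch (the setting names and values below are illustrative, not any real camera's API):

```python
from itertools import product

# Three hypothetical values per setting, as in the idea above.
shutter_speeds = ["1/500", "1/125", "1/30"]
apertures      = ["f/2.8", "f/8", "f/22"]
focus_dists    = ["near", "nominal", "far"]
zooms          = ["close", "medium", "far"]

# One exposure for every combination of the four bracketed settings.
exposures = list(product(shutter_speeds, apertures, focus_dists, zooms))
assert len(exposures) == 3 ** 4  # 81 exposures per press of the shutter
```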
New Scientist - Plenoptic Camera
what MB might have been talking about. [xaviergisz, Aug 11 2007]
||I take it that film is not the medium of choice here.
||And that your photog can handhold the image properly framed for the several seconds minimum it will take to do this.
||And that you've got the patience to sort through all those crap images to find the good one(s).
||Wasn't there an idea somewhere for a "total recall" camera? The idea was that the camera's job was simply to record all aspects of the incoming light (i.e., the number, wavelength and direction of all photons reaching the lens). Then, any desired combination of aperture, shutter speed, focal distance and depth of field can be recreated.
||I thought this would be a way to make nested parentheses easier to read.
||[Maxwell] You wouldn't get the ability to choose different depths of field with that kind of camera - you would be able to make some choice of exposure and white balance, as you can at the moment with RAW files.
||It should also not only bracket the focus and angle of view, but also the position within an x-y-z coordinate system. Thus, you can view the shot from a slightly higher position than the camera was held at, or slightly to the left, or slightly forward. If this were also applied to a pitch-yaw-roll coordinate bracketing set, you could also view the shot as it would be if you were facing to your left a bit, or looking behind you, or pointing up above you. Combine the absolute x-y-z cube with the relative pitch-roll-yaw camera positions, and you could view the shot not only as it was taken, but from over there, actually on the grassy knoll itself, looking back at the photographer.
||It's a good thing nobody's invented the bracketing polarising filter.
||//You wouldn't get the ability to choose different depths of field// I disagree, or perhaps I didn't explain very clearly. If the camera recorded all aspects of the incoming light, then surely it can recreate any image which could have been created from the incoming light in the first place? I guess you would have to record the direction of each photon entering the camera (rather than the point at which it impacted a "film", as in a standard camera). If you do this, you have all the information needed to recreate any view of the scene.
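For what it's worth, this is essentially the plenoptic camera of the link above: record where light arrives and from which direction, and the focal plane can be chosen after the fact. A toy sketch of the standard shift-and-add refocusing, assuming the light field has already been captured as a 4-D array indexed by angular position (u, v) and pixel (y, x) (the array layout and the `alpha` parameterisation are my assumptions, not anything from the idea or the link):

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4-D light field L[u, v, y, x].

    Each sub-aperture image is shifted in proportion to its angular
    coordinates (u, v), then all images are averaged; alpha selects
    the virtual focal plane (alpha = 0 keeps the as-captured focus).
    Shifts are rounded to whole pixels to keep the sketch simple.
    """
    n_u, n_v, h, w = light_field.shape
    cu, cv = (n_u - 1) / 2, (n_v - 1) / 2  # angular centre
    out = np.zeros((h, w))
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (n_u * n_v)
```

Sweeping `alpha` over a range of values is then the software analogue of focus bracketing: every focal plane the lens could have chosen is recoverable from the one capture.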
||To re-emphasize, I don't think this is an original idea - I'm sure I remember seeing it on here.
||//It's a good thing nobody's invented the bracketing polarising filter//
<eyes CP filter, small motor and rubber friction wheel, rubs chin contemplatively> Hmmm </efsmarfwrcc>
||I was thinking of bookshelf brackets that extend into the fifth dimension - the entire British Library on a single tiny shelf - and the bad physics rules only apply to main ideas, not annotations.