
Megapixel multiplier

Using the shake reduction system
(+6, -2)

Recent cameras have a shake compensation system on the CCD: for any movement while the shutter is open, the sensor is moved to follow the path of the image. This usually works well, removing blur from the photos.

Some other cameras do this in the lens.

The CCD sensor is a set of side-by-side pixels, each optimised for red, green or blue. The Sigma Foveon sensor has the three colours stacked on top of each other. For now, let's consider the standard side-by-side system, but I think this could work for either.

Because of the differently coloured sensors, there is a gap between adjacent same-colour pixels: two reds have green and blue in between. Manufacturers use complicated and secret algorithms to interpolate between adjacent reds, greens and blues. If each pixel (in an 8-bit system) is measured 0-255 corresponding to brightness, then let's say red(1) = 100 and red(2) = 200. A linear algorithm can perhaps assume that halfway in between is equivalent to 150. Whatever algorithm is used, there is an assumption going on.
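
(A minimal Python sketch of that interpolation assumption, using the numbers above; this is only an illustration, not any manufacturer's actual demosaicing algorithm.)

    # Two measured red photosites, with a green/blue gap between them.
    red_1 = 100   # 8-bit brightness, 0-255
    red_2 = 200

    # The in-between position never measured red, so its red value must be
    # guessed. A linear algorithm assumes the scene varies smoothly:
    red_between = (red_1 + red_2) / 2
    print(red_between)   # 150 - an assumption, not a measurement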

My idea is to use the CCD anti-shake system together with multiple quicker exposures, moving the CCD a fraction of a pixel pitch each time. That way, the algorithm doesn't need to assume an interpolation, because it has extra data. If only half a pitch were used in the vertical and horizontal directions, the resolution would be four times the original capability of the CCD.
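
(A minimal numpy sketch of the combining step, assuming four exposures taken at sensor offsets of (0, 0), (0, ½), (½, 0) and (½, ½) of a pixel pitch, a static scene and exact shifts; the function name is illustrative, not a real camera API.)

    import numpy as np

    def interleave_half_pitch(e00, e01, e10, e11):
        # Each argument is one exposure (H x W array). The exposures were
        # taken with the sensor shifted by half a pixel pitch vertically
        # and/or horizontally, so their samples interleave onto a grid with
        # twice the linear resolution (four times the pixel count).
        h, w = e00.shape
        out = np.empty((2 * h, 2 * w), dtype=e00.dtype)
        out[0::2, 0::2] = e00   # no shift
        out[0::2, 1::2] = e01   # half a pitch to the right
        out[1::2, 0::2] = e10   # half a pitch down
        out[1::2, 1::2] = e11   # half a pitch down and right
        return out

    # Example with random 8-bit frames standing in for the four exposures:
    frames = [np.random.randint(0, 256, (4, 6), dtype=np.uint8) for _ in range(4)]
    print(interleave_half_pitch(*frames).shape)   # (8, 12): every pixel measured, none interpolated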

Ling, May 02 2007

SupaImage http://www.imageip.com/biz/SupaImage2/
Improve your digital photos by combining the detail of two similar images into one of greater resolution (more pixels) [xaviergisz, May 02 2007]

Kontron ProgRes camera http://www.p.igp.et...les/DDD_hoengg.html
Scroll down a way - the camera's sensor is displaced by a piezo element. [AbsintheWithoutLeave, May 02 2007]

An explanation of the technology http://www.academic...m/tech/leg/pad.html
[AbsintheWithoutLeave, May 02 2007]

Mosaic removal idea Mosaic_20removal
Similar to SupaImage in some ways. [Ling, May 03 2007]

Jitter camera http://www1.cs.colu...ects/jitter_camera/
Prior art—does pretty much this, though not using the OIS system, and not considering the Bayer filter array AFAIR [notexactly, Dec 18 2018]







       I can't find a decent link, but I'm sure that there is research in this direction. What you describe is effectively a digital 'saccade' (the small movements your eye makes to generate a complete field of view). I've considered the same idea myself.
neutrinos_shadow, May 02 2007
  

       //move the CCD fractions of a pitch each time// Baked at least ten years back - I'll go look for a reference.
AbsintheWithoutLeave, May 02 2007
  

       Does "baked" mean "thought of before" or "implemented in a product available on the market"? I don't think it's baked in the latter sense.
Cosh i Pi, May 02 2007
  

       // I don't think it's baked in the latter sense.// Think again. [linky]
AbsintheWithoutLeave, May 02 2007
  

       It is a cool idea - but one I've come across recently on astrophotography forums - apparently, a webcam of fairly low resolution, tracking an object and taking multiple (say 50+) images, which are then interpolated via software, will produce a clearer, more detailed image than some of the best (and most expensive) commercially available equipment can when taking a single, non-interpolated image.
zen_tom, May 02 2007
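
(A rough Python sketch of the shift-and-add stacking described above: each low-resolution frame is dropped onto a finer grid at its jitter offset and the overlaps are averaged. The offsets are assumed known here; real stacking software estimates them by registering the frames first. All names and parameters are illustrative.)

    import numpy as np

    def shift_and_add(frames, offsets, scale=4):
        # frames: list of H x W arrays of the same scene
        # offsets: per-frame (dy, dx) jitter in low-res pixel units
        h, w = frames[0].shape
        accum = np.zeros((h * scale, w * scale))
        hits = np.zeros_like(accum)
        for frame, (dy, dx) in zip(frames, offsets):
            for i in range(h):
                for j in range(w):
                    # Each low-res sample lands at a sub-pixel position on
                    # the fine grid, set by that frame's jitter.
                    y = int(round((i + dy) * scale)) % (h * scale)
                    x = int(round((j + dx) * scale)) % (w * scale)
                    accum[y, x] += frame[i, j]
                    hits[y, x] += 1
        # Averaging the overlapping samples also beats down the noise,
        # roughly by the square root of the number of frames per cell.
        return np.where(hits > 0, accum / np.maximum(hits, 1), 0)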
  

       Like the ultra-deep-space image taken by the Hubble telescope they showed on "Horizon" last night - it was a total of a million seconds (about 2 weeks) of exposure, interpolated to remove noise.
hippo, May 02 2007
  

       //finally get round to explaining the crux of the matter//
gram. "finally get round to explaining the crux of matter"
AbsintheWithoutLeave, May 02 2007
  

       //I bet it took the full half an hour of Horizon's slow-moving radio monologue on top of a visual slideshow to spooky ambient music to finally get round to explaining the crux of the matter// - Yes it did. The entire content of the 1 hour programme could have easily fitted into 5 minutes. It's a style of programming which promotes multitasking, so I was able to read the paper, chat to my wife, make some tea and check my eBay auction (an auction which may be of interest to you, actually) during the programme, all without missing any content.
hippo, May 02 2007
  

       SupaImage--is that the best they can do? In the link, it looks like they just doubled the pixels and softened the image. But I don't see any more detail.
ldischler, May 02 2007
  

       Hey, thanks for the links. A pity that the SupaImage is only available for the Mac. I'd like to play around with that.   

       This idea is actually related to the mosaic removal idea I had (where an anonymous person is represented by a small number of squares, each of a certain colour - not the original frame divided and muddled up). The idea was to remove the mosaic by using multiple frames of a video, and rely on the small movements of the person to provide enough information to rebuild the picture. A bit like squinting and watching.   

       Actually, that particular concept is very close to the SupaImage principle.
Ling, May 02 2007
  

       //a variable teeny moment // inertia or time? Or did you mean movement?   

       I'm missing something in your anno. Do you mean that the reflection from edges will be different depending on the original polarisation of light, and that this would help edge detection?
Ling, May 21 2007
  

       Ah, that's an interesting idea. I think you are talking about, basically, two types of photos in one: one for normal purposes, and another of very short duration to see contrast for subsequent removal of blur (I think!). The problem is that I don't want to remove blur, but to increase resolution. I suppose you could allow the normal handheld shake of the camera to produce the moving image, and then use multiple high-speed exposures to build the higher resolution version.
Ling, May 21 2007
  
      