Computer: Algorithm
Calculated image enhancement   (+4, -1)
get something from nothing.

It is commonly accepted that image enhancement as seen on CSI and in the movies is impossible. If you enlarge a photo, all you'll see is individual pixels. You can play around with levels, gamma, etc. to increase some clarity, but beyond that you're stuck.

I propose a technique that would let you increase clarity somewhat by back-solving what an image should be.

First, scale the image up several times so each individual pixel is now several sub-pixels, maybe 100 or so. Now think of a simple image, say a monotone square or circle, that has a blur applied to it or is out of focus. To the eye you can still tell it's a square, but it doesn't have sharp edges. Classic thinking says it's impossible to make this image any clearer.

BUT if you know the blur radius in pixels (i.e. how many adjacent pixels on either side of the object's border blurred together), you can back-solve where the border was. And if you know where the border is, you can have the program un-apply the blur by separating the tones back out into the original colors of the object and background.

More simply put: if you have a blurred region between a totally black and a totally white area, you can back-solve where the border would be by finding the 50%-grey squares. There are only a finite number of color and location possibilities for the sub-pixels that make up a blurred larger pixel. If you have a blurred purple region, the original could only have been red-and-blue or purple, and by comparing and calculating against the surrounding pixels you could get a good idea which. Think of it as a checksum: for each blurred pixel you're trying to figure out what sub-pixels could factor together to make it.

Adding more colors and more complexity would of course limit the ability to back-solve for contrast location, but it should still be possible to get some enhancement if you find the best-fit blur radius. This would work better the higher the resolution of the original image.

This could also be used, somewhat, for decompression, as you can back-solve what input would be needed to produce the compressed output (although there would be uncertainty limits).
-- metarinka, Nov 13 2010
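The 50%-grey back-solve can be sketched in one dimension, assuming a known box blur and a single black/white edge (illustrative code only; names and numbers are made up):

```python
import numpy as np

def blur_step_edge(edge_pos, n=20, radius=3):
    """Sample a step edge (0 -> 1 at edge_pos), then box-blur it with a
    known radius. Supersampled so the true edge can sit between pixels."""
    fine = np.linspace(0, n, n * 100, endpoint=False)
    signal = (fine >= edge_pos).astype(float)
    kernel = np.ones(2 * radius * 100) / (2 * radius * 100)
    blurred = np.convolve(signal, kernel, mode="same")
    return blurred[::100]              # back down to n coarse pixels

def locate_edge(pixels):
    """Back-solve the border: interpolate where the ramp crosses 50% grey."""
    i = int(np.argmax(pixels >= 0.5))  # first pixel at or above 50% grey
    x0, x1 = pixels[i - 1], pixels[i]
    return (i - 1) + (0.5 - x0) / (x1 - x0)

pixels = blur_step_edge(edge_pos=9.25, radius=3)
est = locate_edge(pixels)              # est is close to 9.25, despite the
                                       # 6-pixel-wide blur
```

The point being that with a *known* blur, the edge position is recoverable to sub-pixel precision even though no single pixel contains it.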

Not Sure If This is The Same http://en.wikipedia...ki/Edge_enhancement
Image enhancement. [Boomershine, Nov 13 2010]

Information Theory http://en.wikipedia.../Information_theory
If the information you're after (a readable numberplate) can't be represented by the pixels you're starting with, then there's no way any enhancement will get to this information, as Claude Shannon might have said. [hippo, Nov 17 2010]

Camera takes pictures around corners. http://www.bbc.co.u...technology-11544037
Work in progress. [DrBob, Nov 19 2010]

http://en.wikipedia.../wiki/Shift-and-add [Spacecoyote, Nov 20 2010]

Isn't this what the brain does?
-- MaxwellBuchanan, Nov 13 2010


Sounds pretty much like what mine tries to do (I think). My display is somewhat limited, though.

[link]
-- Boomershine, Nov 13 2010


Image enhancement is good at making things *look* prettier or clearer. However, it is not very good at extracting additional information from an image, except in rare cases (for example, where the image has been distorted or corrupted non-randomly, and a logical inversion of the distortion or corruption can be done; or where brightness or contrast can be expanded dramatically).

The best software for extracting information from low-grade or ambiguous data is the human brain. If you have a low-resolution image of a car number plate or a human face, the best option is generally just to enlarge the image and look at it.
-- MaxwellBuchanan, Nov 13 2010


[Ian Tindale] If you're following [MaxwellBuchanan]'s suggestion and using a human brain for your image processor, the answer you get will be "human face" pretty much independently of what the correct answer actually is.

Brains are really good at finding what they're looking for.
-- mouseposture, Nov 14 2010


[mouse] Can yours figure out where I left my keys, perchance?
-- Boomershine, Nov 14 2010


This works similarly to edge enhancement, but it's different. EE tries to boost contrast at edges by making them artificially sharp (which actually lowers detail).

In my technique I'm trying to actually INCREASE detail, by mathematically calculating which combination of sub-pixels most likely constitutes a given pixel.

My technique would most likely be useful for loss of focus and blur, where detail information is lost in a predictable fashion.
-- metarinka, Nov 14 2010


Have to agree with [mb]; the problem with this technique is that you can already extract information down to about 2-3 pixel widths on any given image with various sharpening techniques. Below that, the information just doesn't exist in the image, so no amount of enhancement will create it.
-- MechE, Nov 16 2010


Most things don't have perfectly smooth edges.
-- WcW, Nov 17 2010


Isn't this like the 'Sharpen' filter in most image manipulation programs? And [Max-B] is right - if the information isn't there then no manipulation of the image will find it - making a guess isn't the same thing, and won't help you reconstruct the image of a face or a numberplate. Information Theory (see link) is very interesting.
-- hippo, Nov 17 2010


//if the information isn't there then no manipulation of the image will find it//

I think there are ways of extracting extra information out of an image 'beyond the pixels'. For example, if you have an image of a number plate you cannot read, you could combine knowledge of the typical characteristics of number plates with the image to perhaps get more information than is present in the picture alone.

I vaguely recall a company called IFS which developed image compression software. The software would take an image and deconstruct it into overlapping fractals. One of the interesting effects of the compression is that you could zoom into the image to a higher resolution than the original image. Although the additional zoom could be considered a mere artifact of the compression, it would sometimes give an accurate reproduction of what the image would have looked like if photographed at a higher resolution. The explanation was that the world itself is somewhat fractal, thus encoding with fractals can somehow impart extra information not present in the captured image. I haven't heard of the company for many years (presumably defunct), so this should all be taken with a grain of salt.
-- xaviergisz, Nov 17 2010


//fractals can somehow impart extra information not present in the captured image// - no, if you're capturing an image and storing it as fractals, then whatever information can be gleaned from the fractal encoding must have been present in the captured image. Fractals aren't clever enough to just make stuff up. And in normal images made up of pixels, there is nothing 'beyond the pixels', and the number of pixels in an image put a hard limit on the information-carrying properties of the image. You might be able to infer things based on the subject matter of the image and some context and 'real-world' knowledge but that's different and in an information theory sense, you're adding vast amounts of information by doing this.
-- hippo, Nov 17 2010


//combine the knowledge of typical characteristics of number plates with the image// A Bayesian estimator, applied to image analysis. Perfectly feasible -- I'm sure it's been done ('specially video) -- but might not count as an implementation of this idea.
-- mouseposture, Nov 17 2010
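[mouseposture]'s Bayesian estimator, in toy form: hypothetical 1-D "glyphs" stand in for plate characters, with an assumed-known blur and Gaussian pixel noise (everything here is illustrative, not a real recognizer):

```python
import numpy as np

# Prior knowledge of "typical characteristics": two known glyph templates.
templates = {
    "E": np.array([1, 1, 1, 0, 1, 0, 1, 1, 1], float),
    "F": np.array([1, 1, 1, 0, 1, 0, 1, 0, 0], float),
}
prior = {"E": 0.5, "F": 0.5}
kernel = np.array([0.25, 0.5, 0.25])   # assumed-known blur

rng = np.random.default_rng(0)
observed = (np.convolve(templates["E"], kernel, mode="same")
            + 0.05 * rng.standard_normal(9))

def posterior(observed, sigma=0.05):
    """Bayes: prior times a Gaussian likelihood of the observed pixels."""
    logp = {name: np.log(prior[name])
            - np.sum((observed - np.convolve(t, kernel, mode="same")) ** 2)
            / (2 * sigma ** 2)
            for name, t in templates.items()}
    m = max(logp.values())
    unnorm = {k: np.exp(v - m) for k, v in logp.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

post = posterior(observed)   # post["E"] is close to 1: the blurred,
                             # noisy glyph is confidently read as "E"
```

The prior distribution is explicit and controllable, which is exactly the claimed advantage over the brain's half-assed version.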


/That way, someone can eventually ascertain if it is a car number plate or a human face./

One could marry an image analysis program to this, add some noise for stochastic resonance and do it iteratively. For example - you want to make out digits on a plate. Run multiple iterations of the metarinka process described here + an element of noise, with the image analysis program watching. When the image analyzer spits out digits you are done with that run. After 1000 runs, the digits or text produced most often by the image analyzer are your answer.

Likewise the face: there is a program in my camera that recognizes a face, and it is good. Once the metarinka process has refined an image such that it is recognizably a face, that is your stop point. A face might not be as good as digits for a test of this.

I have been perusing innocentive.com after finding the site linked on the HB. I think there is a call for an image analysis program. You should take a look, metarinka.
-- bungston, Nov 17 2010
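The vote-over-noisy-runs loop [bungston] describes can be sketched with a deliberately fake enhance-and-recognize step (a hypothetical stand-in that is right 70% of the time), just to show how the tally converges:

```python
import collections
import random

random.seed(1)
TRUE_DIGIT = "7"

def enhance_and_recognise():
    """Stand-in for one noisy enhance-then-recognize run: returns the
    right digit 70% of the time, a random wrong one otherwise."""
    if random.random() < 0.7:
        return TRUE_DIGIT
    return random.choice([d for d in "0123456789" if d != TRUE_DIGIT])

# 1000 runs, each with fresh noise; the most frequent output wins.
tally = collections.Counter(enhance_and_recognise() for _ in range(1000))
answer, votes = tally.most_common(1)[0]   # answer == "7" by a wide margin
```

Even an unreliable per-run recognizer gives a near-certain majority verdict, because the wrong answers scatter while the right one accumulates.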


As [hippo] pointed out, and following from what [MaxwellBuchanan, ahem] said earlier:

1) enhancement can't create information that isn't there
2) the human brain is the best "image enhancer" in terms of extracting information from a poor image.

and

3) the only way you can get software to do something like this is to tell it what you think the image should look like (a face; a licence plate), and then let the software do its best. But even then, I doubt the software will do better than a human being who is told "it's a licence plate - can you read it?".

The only exceptions are where some very simple algorithm can remove some very simple problem with the image (eg, greatly expanding the global contrast; removing sharp noise which distracts the eye, and similar).
-- MaxwellBuchanan, Nov 17 2010


[bungston] has caught onto my idea, and I like that evolution of it: multiple iterations run past a human or an image-analysis program.

There will always be a threshold limit to how much information can be stored in a given quantity of data, but I think it's closed-minded to say we are at the current limit of inference and information-extraction technology. Current thinking says that each pixel is a discrete unit representing the intensity and wavelength of light delivered to a CCD at a given moment. Millions of these pixels, side by side, are interpreted by the human brain as distinct shapes (a licence plate or a face, perhaps). No additional information is captured below this threshold of pixel size. I agree with that statement.

What I'm proposing is a recognition that there is a relationship between adjacent pixels that goes BEYOND edge-enhancement principles of boosting contrast based on existing contrast.

Basically stated: in a blurred image (a Gaussian blur, say), each discrete pixel is not just the sum of the original information present in that grid square but is also affected by the content of adjacent pixels (in this case, light scattered in a predictable fashion). Working on this assumption, we can attempt to find a best-fit arrangement of sub-pixels that, when the blur or loss-of-detail event is applied to it, would produce an identical image. There's only a finite number of these sub-pixel combinations that could produce the final pixel.

By inferring information from adjacent pixels (within a certain radius), e.g. a blue pixel next to a purple pixel indicates that the sub-pixels are most likely blue, red, or purple, we can eliminate the vast majority of the possible sub-pixel combinations and improve a human's ability to pick out the discrete shapes of the macro image. There would be a limit to how much extra detail could be guessed at; it would probably be related to a fraction of pixel size and blur radius, as decreasing the sub-pixel size increases the number of possible combinations that could create a given pixel and reduces your ability to eliminate possibilities.

A much simpler way of looking at this is as a best-fit guess of each sub-pixel to support a given pixel. We are guessing detail to support human inference.
-- metarinka, Nov 18 2010
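The "finite number of sub-pixel combinations" point lends itself to a toy enumeration. A sketch, assuming each coarse pixel is the average of four binary (black/white) sub-pixels; all numbers are illustrative:

```python
import itertools

def candidates(mean, n=4, levels=(0.0, 1.0)):
    """All n-sub-pixel patterns whose average equals the observed pixel value."""
    return [p for p in itertools.product(levels, repeat=n)
            if abs(sum(p) / n - mean) < 1e-9]

# A 50%-grey pixel made of 4 binary sub-pixels: only 6 of the 16
# possible patterns are consistent with it.
combos = candidates(0.5)                      # len(combos) == 6

# A neighbouring white pixel suggests the leftmost sub-pixel is white
# too, pruning the candidate set further.
pruned = [p for p in combos if p[0] == 1.0]   # len(pruned) == 3
```

This is the checksum analogy made literal: the observed pixel constrains which sub-pixel patterns could have produced it, and neighbouring pixels eliminate more.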


About the best you can do without pulling data out of thin air is to treat the RGB subpixels as monochrome pixels, which would triple your resolution, so to speak, in one direction, and then interpolate the color information back on top. That doesn't really get you anywhere, though: focus, zoom and other optical issues generally become a problem before resolution does.
-- Spacecoyote, Nov 18 2010
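[Spacecoyote]'s reinterpretation is essentially a reshape, with no information added. A toy sketch, assuming the R, G and B samples of each pixel are laid out horizontally:

```python
import numpy as np

# Hypothetical 2x2 RGB image: shape (height, width, 3)
rgb = np.arange(12, dtype=float).reshape(2, 2, 3)

# Treat each channel sample as its own monochrome "pixel":
# each row becomes R,G,B,R,G,B - triple the width, same data.
mono = rgb.reshape(2, 6)
```

The same twelve numbers, just re-gridded, which is why it "doesn't really get you anywhere".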


Of course, if you had more than one camera adding an extra dimension, you could solve for angular differences and interpolate.
-- FlyingToaster, Nov 18 2010


//I.E a blue pixel next to a purple pixel indicates that the subpixels are most likely comprised of blue, red, or purple subpixels//

[metarinka] You're completely right. But my argument is that the human brain does this very well itself - we have tens of millions of years of evolution helping us to extract the best-guess answer from less-than-perfect views.

Your last statement, // we are guessing detail to support human inference//, is closer to the mark. But, to the extent that your enhancement will improve the image, this is limited to the accuracy of the human inference. In other words, you're still not adding information, you're just making the picture look more like what you believe it to be.

The "face on Mars" can be "enhanced" to look much more facelike, if you give it to a programme which is told to find a face. This is great if you want to support the story, but it doesn't add any information.
-- MaxwellBuchanan, Nov 18 2010


//But my argument is that the human brain does this very well itself// And my argument is that the human brain is doing half-assed Bayesian estimation. Full-assed Bayesian estimation, with a proper computer, will be better, in at least some applications, because you'll have full knowledge of and control over the prior distribution.
-- mouseposture, Nov 18 2010


Maybe a purple pixel between a red and blue is half red and half blue. Or maybe it's purple. Is the pixel mixed, or is the paint? (Okay, this isn't super likely for sharp, discrete coloring, but very few objects truly have discrete coloring).

And there do exist algorithms/filters that take a pixel, look at the 8 or 24 around it, and use those to map its color. If you take an image, double the pixel count, and run one of these, you'll get exactly what you're talking about, but it still doesn't increase the amount of information; it just smooths out color transitions and reduces noise.
-- MechE, Nov 18 2010


//If you take an image, double the pixel count, and run one of these....//

Spooky. That same thought occurred to me today. I'm not sure what algorithm Photoshop uses for pixel-upping, but it's going to be basically what's described.

A guy in my lab invented the confocal microscope, and has done all sorts of things to push image resolution to its limits (wayyyy better than the wavelength of light, btw - that's another childhood myth down the pan). I am pretty sure that what he said boils down to "you can't make gold from shit", or words to that effect.
-- MaxwellBuchanan, Nov 18 2010


/"you can't make gold from shit"/

that gives me an idea...
-- bungston, Nov 19 2010


//A guy in my lab invented the confocal microscope// A guy in your lab's Marvin Minsky?
-- spidermother, Nov 20 2010


More often I have heard the term "You can't polish a turd".

I posted something way back about rebuilding resolution from those pixellated video images - the ones with anonymous informers.

If one squinted and the subject moved about then it was possible to get a mental image of the original face even if the pixel blocks were quite large. Perhaps something like that could be used for photos - maybe if the camera is deliberately panned during the shot.
-- Ling, Nov 20 2010


That's called shift-and-add or image stacking, and it's baked. [link]
-- Spacecoyote, Nov 20 2010
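For the curious, a bare-bones 1-D version of shift-and-add, with made-up shifts and noise levels: each frame is registered by cross-correlation against the first, then the aligned frames are averaged:

```python
import numpy as np

rng = np.random.default_rng(42)
scene = np.zeros(64)
scene[30:34] = 1.0                    # the "true" detail we want back

# Five noisy frames of the same scene, each shifted by a few pixels
# (as if the camera panned between exposures).
shifts = [0, 3, -2, 5, -4]
frames = [np.roll(scene, s) + 0.1 * rng.standard_normal(64)
          for s in shifts]

ref = frames[0]
lags = np.arange(-8, 9)
stack = np.zeros_like(scene)
for f in frames:
    # Register: pick the lag that best correlates with the reference.
    corr = [np.dot(ref, np.roll(f, -k)) for k in lags]
    best = int(lags[int(np.argmax(corr))])
    stack += np.roll(f, -best)
stack /= len(frames)
# The stacked frame is much closer to the scene than any single frame:
# noise averages down roughly as 1/sqrt(number of frames).
```

This is why the squint-at-the-moving-informer trick works: the detail is consistent across frames while the degradation isn't.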


//A guy in your lab's Marvin Minsky?// No, but a guy in my lab is Brad Amos, who devised the usable (as opposed to proof-of-concept) confocal.
-- MaxwellBuchanan, Nov 20 2010


In a similar vein, I also recall reading about a technique that uses the location and pattern of the RGB sub-pixels in a CMOS chip to increase clarity.

Since a CMOS chip is made of rectangular RGB elements, color detection varies on and off axis, and from location to location in the plane. E.g. a red photon hitting the green/blue sub-pixel will produce noise or no reaction, IIRC. Anyway, knowing the order of the elements (RGB, GRB, etc.) and the aspect ratio of those sub-pixels could help increase clarity by compensating for the amount of light that would bleed into an adjacent pixel or not be caught.
-- metarinka, Dec 21 2010


[Max] Doesn't Minsky get inventor credit though? Otherwise, people would go around saying the Wright Brothers 'invented' the aeroplane, and similar nonsense.

Still, kudos to both of them - at least one kudo each.
-- spidermother, Dec 21 2010


