Halfbakery

Teleview Privacy Glass

Glass that randomly displaces rays within some radius but does not alter their direction
  (+6)

Tele, from the Greek word meaning "at a distance". I call it teleview glass.

If a type of glass could be constructed so that it displaces a "ray" vector randomly within some radius, but does not alter its direction, then some interesting uses exist.

Looking at stuff through the glass, everything would be blurred by a fixed-size radius, not by an angular cone the way frosted glass blurs. Let's say, for example, a 10cm radius.

Objects close by would be greatly blurred, people's faces unrecognizable, and it would be suitable for use in bathrooms, showers or locker rooms.

Objects very far away, so far that the angular resolution of the human eye is significantly more than 10cm at that distance, would appear unblurred. So you could still have a view on mountains, clouds or the ocean. To a lesser extent, trees.

It would now be possible to have showers, locker rooms, dressing rooms etc. with a clear view at a distance and still full privacy.
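As a rough numerical check of the paragraphs above (a sketch: the 10cm radius is from the idea itself, while the ~1 arcminute figure for the eye's angular resolution is a standard textbook approximation):

```python
import math

BLUR_RADIUS = 0.10                     # the fixed 10cm displacement radius (m)
EYE_RESOLUTION = math.radians(1 / 60)  # ~1 arcminute, in radians

def blur_angle(distance_m):
    """Angle subtended at the viewer by a 10cm blur at the given distance."""
    return math.atan2(BLUR_RADIUS, distance_m)

for d in (0.5, 2, 50, 1000):           # face at the glass ... distant mountain
    print(f"{d:7.1f} m: blur = {blur_angle(d) / EYE_RESOLUTION:8.1f} x eye resolution")
```

At half a metre the blur is hundreds of times the eye's resolution (faces unrecognizable); by a kilometre it has dropped below it, so mountains and clouds stay sharp.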

I've already come up with one way to construct such glass (very thick, expensive, and requiring precision), but the idea here is the concept. At some point I will render a sample picture in CG.

Eon, Apr 07 2013

Frosted Glass http://www.freeimagehosting.net/pty3j
Back ray-trace of regular frosted glass [Eon, Apr 08 2013]

Teleview Glass http://www.freeimagehosting.net/ahm4g
Back ray-trace of teleview glass [Eon, Apr 08 2013]

View http://www.freeimagehosting.net/bv159
View from behind teleview privacy glass [Eon, Apr 08 2013]

Rendered, with refraction http://i923.photobu...ion_zps0561c338.jpg
The near, small figure (red) is scrambled, as is the far, big (blue) figure. [MaxwellBuchanan, Apr 09 2013]

Same scene, without refraction http://i923.photobu...ion_zpsd4caa1dc.jpg
just to show there's no trickery. [MaxwellBuchanan, Apr 09 2013]

Same scene again, shown from the side. QED http://i923.photobu...ene_zps8f270a24.jpg
The "teleview" glass is the white vertical line to the right, as seen edge-on. [MaxwellBuchanan, Apr 09 2013]

Refraction by a slab at an angle http://i923.photobu...ion_zps2e3ef4b8.jpg
Red figure is near and small; blue is far and big. [MaxwellBuchanan, Apr 10 2013]

As above, but slab has refractive index of 1 http://i923.photobu...ion_zpsee29f94c.jpg
No refraction. The transparent slab is invisible. [MaxwellBuchanan, Apr 10 2013]

...and the whole scene viewed from off to one side http://i923.photobu...nce_zps5ee0df75.jpg
The glass block is grey here, for visibility. [MaxwellBuchanan, Apr 10 2013]

...and top view for [MechE] http://i923.photobu...iew_zps54c72c15.jpg
I've put a grey ball in place of the red figure, which was too small to see. [MaxwellBuchanan, Apr 10 2013]

The Impossible Lens http://www.wipo.int...studies/frazier.htm
A bit of the story of Jim Frazier. An interesting doco. [AusCan531, Apr 11 2013]

Obligatory Father Ted link http://www.google.c...m=bv.45107431,d.aGc
[spidermother, Apr 11 2013]


       Hang on a second. If the glass displaces bits of the image by 10cm in a random direction, and if this "10cm" is measured on the glass (rather than on the object being viewed), then a cow in the distance is going to be blurred by a cow's-image-length, whilst a naked person standing just on the other side of the glass will be blurred by less than a person's-penis-image-length.   

       I therefore contend that this will fail.
MaxwellBuchanan, Apr 07 2013
  

       This idea teeters precariously upon the word 'if'.   

       Welcome to the Halfbakery, [Eon].
Alterother, Apr 07 2013
  

       I was taking the "If" as a synopoeotic interjection.   

       My point was that, if the "if" were indeed satisfied, then the glass would still not fail to misperform as described.   

       But indeed, welcome to the hollowed grounds of the Halfbakery. In positing a material which not only doesn't exist, but would also fail to do the required job if it did exist, you have indelibly marked yourself out for greatness here.   

       Or it may just be a gravy stain.
MaxwellBuchanan, Apr 07 2013
  

       OK, first: provided that the departure angles of rays aren't affected by the glass, there is no reason why a cow should be blurred over its full length. The blur radius remains 10cm regardless of distance.   

       As for feasibility of construction, imagine a 1D blur first. If this could be constructed, then placing two such sheets perpendicular to each other would give the desired effect.   

       One can displace a ray by having it pass through a thick sheet of glass at an angle. Refraction happens twice, the second time restoring the direction, but there is a net displacement of the vector. A 1D blur could be achieved with a sandwich of such glass strips (say vertical), very thin, at various angles (tilts), each displacing the ray by some amount (vertically). Looking through the glass means looking at the side of the vertical sandwich.   

       Between the slices there would have to be another medium. Total internal reflection keeps light from crossing between slices, and circular polarization on the two ends ensures that only rays that bounced an even number of times pass. Lots of light will be blocked, but from the remaining light the desired effect would be achieved, at least in theory. I didn't want to get into the details, but I don't want my idea to be perceived as wishful thinking.
Eon, Apr 08 2013
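The tilted-slab displacement trick described above can be sanity-checked numerically. A minimal sketch (the 30cm thickness and 45° tilt are illustrative values; n=1.5 is the glass refractive index used elsewhere in the thread):

```python
import math

def lateral_shift(thickness, incidence_deg, n=1.5):
    """Sideways displacement of a ray crossing a parallel-sided slab.

    Refraction at the two parallel faces cancels, so the ray exits
    parallel to its original direction, shifted sideways by
        d = t * sin(theta_i - theta_r) / cos(theta_r),
    where sin(theta_i) = n * sin(theta_r) (Snell's law, air to glass).
    """
    ti = math.radians(incidence_deg)
    tr = math.asin(math.sin(ti) / n)
    return thickness * math.sin(ti - tr) / math.cos(tr)

# A 30cm slab tilted 45 degrees to the line of sight:
print(f"{lateral_shift(0.30, 45):.3f} m")
```

For a 30cm slab at 45° the shift comes out near 9.9cm, conveniently close to the 10cm radius the idea proposes.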
  

       Around here you have to get into details or your idea _will_ be perceived as wishful thinking. This place is all about getting into details. Other 'bakers will complain if you don't.
Alterother, Apr 08 2013
  

       So, it's kind of a deliberately out-of-alignment Fresnel lens?
not_morrison_rm, Apr 08 2013
  

       //The blur radius remains 10cm regardless of distance.//   

       Yes, but it's the radius of the blur as measured on the image as it passes through the plane of the glass.   

       Suppose you look at a face, through the window. Suppose also that both you and the other person are about a metre away from the glass. A light ray from their left ear to your eye, and a light ray from their right ear to your eye, will travel through the glass at points about 10cm apart. Your glass randomly displaces the image by 10cm, so their face will appear completely blurred.   

       Now you look at a distant mountain. Once again, you're about a metre from the glass. The mountain's size and distance are such that it appears about as wide as the person's head did.   

       A light ray from the left side of the mountain, and a light ray from the right side of the mountain, both travelling towards your eye, will pass through the glass about 10cm apart. Since your glass displaces the light rays by about 10cm, the image of the mountain will once again be fully blurred.   

       In other words, what's important is the *apparent* size of the object, as projected onto the window. A big object far away appears as large as a small object close up, and therefore both will be equally blurred. Still, apart from a fundamental misunderstanding, it's a nice idea.
MaxwellBuchanan, Apr 08 2013
  

       But what's with the pinhole camera?   

       Seriously, unless the glass can magically tell whether a light ray is coming from a small thing close up, or a big thing far away, this ain't gonna work.   

       If you think different, just sketch a diagram of the rays coming from opposite edges of a big distant thing and from a small near thing, to a point (the eye) and show me how they behave.
MaxwellBuchanan, Apr 08 2013
  

       One can simulate what would be seen by using backwards raytracing. It basically means considering only those rays that ended up hitting your eye from a certain direction, then asking where the light was collected from, by starting at the eye and tracing the path (collection area) in reverse.   

       Consider someone standing next to the window on the outside, and a big building in the distance. With the collection area indicated in red, link 1 shows what happens with normal frosted glass: from the observer's point of view, the amount of blur is more or less angularly constant, and the absolute blur radius in world units increases with distance. Link 2 shows the teleview glass picture: the absolute blur radius is constant, or from the observer's point of view, angularly decreasing with distance. Link 3 shows what such an observer could see.   

       That last image was just constructed from layers and pics I found on the internet, with some blur effects, not REALLY ray-traced. But I hope that helps explain it :)
Eon, Apr 08 2013
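The backward-raytracing argument reduces to a toy 1-D simulation (an illustrative sketch, not the C++ tracer mentioned later in the thread; the object distances are assumed values):

```python
import math
import random

random.seed(42)
MAX_SHIFT = 0.10   # the glass shifts each ray sideways by up to 10cm, direction unchanged

def mean_angular_blur(object_dist, samples=10_000):
    """Back-trace rays from a pinhole eye to an object plane behind the glass.

    A direction-preserving sideways shift at the glass moves the ray's
    landing point in the object plane by exactly that shift, so the
    apparent angular error at the eye is atan(shift / distance).
    """
    total = 0.0
    for _ in range(samples):
        shift = random.uniform(-MAX_SHIFT, MAX_SHIFT)   # 1-D for simplicity
        total += abs(math.atan2(shift, object_dist))
    return total / samples

near = mean_angular_blur(2.0)     # small object 2 m away
far = mean_angular_blur(200.0)    # large object 100x further away
print(near / far)                 # near object is blurred ~100x more, angularly
```

The physical blur is the same 10cm in both cases; only its angular size at the eye differs, which is the distinction being drawn between links 1 and 2.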
  

       I see the flaw in your reasoning - "parallel rays". Stop and think.
MaxwellBuchanan, Apr 08 2013
  

       I agree with [Eon] and [bigsleep]. It's fairly easy to see how this would work with a pinhole camera, but I'm pretty sure it would work equally well with a large-aperture camera, as long as it was focused on the distant object. And of course the catch is creating the material that can displace the rays of light without altering their angle at all.   

       [Eon], could you make a diagram of your proposed solution? I think I basically figured out what you mean, but my first impression was way off, and it's hard to provide criticism if I don't understand what you mean.   

       Does Iceland Spar have the basic required optical characteristics? It creates a double image of close objects (see picture in Iceland Spar link), yet doesn't appear to cause significant distortion of distant objects (see photo of Iceland Spar in Sunstone link). Of course making a bathroom window from these might not be practical since by the time you layered enough to get random displacement rather than double images, I suspect your clarity would be completely gone. I think this shows that such a window is theoretically possible.   

       Here's another implementation concept: make some mini-blinds with flat slats, where each slat is a double-sided half-silvered mirror, and set the blinds at 45 degrees. A perpendicular ray of light will have a 50% chance of going straight through. The other 50% will turn 90 degrees and hit the next slat; 25% will reflect off this and be offset by one slat spacing; 25% will go through this slat and hit the next, with 12.5% reflecting at 2 slat spacings, etc.   

       Viewed from 45 degrees one way, it will be transparent. At the other 45-degree angle, only light reflected from inside will be visible. If there is glass in the space between the slats, the refraction will reduce undesired reflection. Transparency would of course be eliminated simply by having two layers with opposite angles on the slats, so you'd have at least 4 layers with different orientations to provide full privacy and 2D blurring.
scad mientist, Apr 08 2013
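The slat-by-slat percentages in the mini-blind concept above form a simple geometric series, which can be checked directly (a sketch of the stated probabilities, ignoring absorption and the odd-bounce issue raised later in the thread):

```python
# Each half-silvered slat passes 50% of the light straight on and
# reflects 50% toward the next slat, so the chance of a ray exiting
# offset by k slat spacings is 0.5^(k+1): 50%, 25%, 12.5%, ...
def offset_probability(k):
    return 0.5 ** (k + 1)

print([offset_probability(k) for k in range(4)])       # [0.5, 0.25, 0.125, 0.0625]
print(sum(offset_probability(k) for k in range(50)))   # sums to (nearly) 1
```

Note that half the light exits with no offset at all, which is why the concept stacks several layers at different orientations.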
  

       So did you actually understand what you just quoted any better than I did? Does that mean it could theoretically be used for this purpose or not?
scad mientist, Apr 09 2013
  

       //do I need to do a full-on stochastic simulation//   

       No, just sketch me a picture showing:   

       (a) the observer
(b) the window, wherein light rays are displaced in a random direction by 10cm
(c) A small arrow close to the window
(d) A big arrow far away from the window, subtending the same angle as (c) and
(e) The light rays reaching the observer's eye from the head and tail of each arrow.
  

       Feel free to use colour.
MaxwellBuchanan, Apr 09 2013
  

       Yes, that's the sort of diagram of which I was thinking. And it shows that the amount of blur is the same for the near and the far object.   

       So you'll see a nearby face as an oval blob. You'll see the distant mountain as a mountain-shaped blob. No?   

       I'm not sure I see what you mean by your "effective X pixel resolution" labels, though. If you're saying that the visible resolution of a near object is 5 pixels whilst that of a far object is 20 pixels, then you're saying that the nearby face is *less* blurred than the distant mountain, which can't be right either.   

       Also, on the diagram, your rays (and viewing cone) converge to a point. What happens if you make the situation more realistic, i.e. the retina is a small screen placed just behind the point at which the rays (on your diagram) converge?
MaxwellBuchanan, Apr 09 2013
  

       OK, crayons and the back of a large envelope suggest that your model only applies for a "point sensor" (the point at which your light rays converge), which has zero spatial resolution.   

       If, in place of your point-like "observer" you instead place a lens (the eye's lens) of finite and fairly small diameter, and if you then place a curved screen (the retina) behind that lens, then things fall apart. If you want to consider a ray-tracing with the rays coming out of the eye and hitting the object (which, agreed, gives the same result as the actual situation of light rays going from object to eye), then you have to consider rays coming out of the eye over a range of angles from each point on the retina.   

       It's more complex than I had first thought, but I am still certain that it can't work, because the virtual image of any large, distant object will be the same as the virtual image of a small, near object when taken at a plane close to (and just on the far side of) the glass.
MaxwellBuchanan, Apr 09 2013
  

       OK, I modelled it in Cinema4D. Here's what I did:   

       (1) create a hexagonal column (hexagonal prism)
(2) Sliced the two ends of it at the same angle, so the two end-faces are parallel to each other, but not at right-angles to the long axis of the column.
(3) Gave the column a refractive index of 1.5 (for glass). This column, viewed end on, will displace an image as described in the idea.
(4) duplicated this column 31 times, displacing it each time to make a row of columns just touching on their long flat sides.
(5) Rotated each successive column by 60°. Thus, each successive column will displace an image upward, up-and-left, left, down-and-left etc.
(6) Duplicated this entire row of columns 31 times, displacing each row
  

       OK so far? This gives us a "bundle" of hexagonal columns. They look like a honeycomb when viewed end-on. However, each of the columns will displace an image in a different direction. This entire collection of columns is our sheet of "teleview glass".   

       Are you comfortable that, so far, I'm complying with the idea?   

       So next:   

       (7) I put a viewpoint on one side of the "glass". Everything is now seen from an "observer" at that point.
(8) I added a human figure, placed on the far side of the glass and fairly close.
(9) I made a duplicate of the figure, and scaled it up by 10-fold in each dimension.
(10) I moved the enlarged figure further back from the glass (10-fold further from the viewer) so that it appears the same size as the first figure. I also moved it to one side, so you can see the near, normal-sized figure and the distant, giant figure side by side.
(11) I then rendered the scene.
(12) I turned off refraction (ie, I set the refractive index of the glass to 1.0) and re-rendered the scene, just to show that there's no trickery.
  

       (13) I also rendered the scene from a viewpoint off to one side, so you can see that the figures really are different sizes and at different distances from the "teleview" glass (which appears as a white vertical line toward the right of the image in the third link).   

       I'll provide a link to the images. In each case, the RED figure is the smaller, nearer one; the BLUE figure is the further, larger one.   

       To my eye, they are both equally scrambled. Tell me if you think differently.
MaxwellBuchanan, Apr 09 2013
  

       //The distortion is just an artifact of the column bundle and not just ray translation.//   

       Unlikely. C4D is generally pretty good. The columns are parallel. Their opposite faces are also parallel (I angled the ends by intersecting with a cube, i.e. both ends are 'cut' parallel to each other). The image without refraction (but with everything else) is undistorted. But perhaps by 'column bundle' you are thinking of something else?   

       Short of getting this thing fabricated, I don't think I can go any further - I'm satisfied that it won't work, but it was an interesting problem.   

       (For the record, I also went on to try this with a two-fold finer pattern of prismatic glass; it gives essentially the same result.)   

       If you think differently, please feel free to go ahead with your stochastic simulation, or a rendering in whatever modelling software you prefer. Probably a "pinhole lens" is fine (and will be simpler) - just don't forget to trace through onto the imaginary retina.
MaxwellBuchanan, Apr 09 2013
  

       After point 6 I'm not happy, as teleview glass cannot be constructed this way. Many of the rays inside the hexagonal prism will undergo total internal reflection which does way more than displacement - those rays will end up leaving the prism with a different angle than they entered with. If you make the prism tubes short enough so that this doesn't happen a lot then the displacement will again be too short.   

       It really isn't easy to construct such glass, and I don't think it would be easy to simulate in an off-the-shelf ray-tracer. (I wrote a basic one in C++; if I get time over the weekend I will generate an image.)   

       Granted, my first two links show what happens for a single "pixel" sensor and yes, I assume that that pixel is infinitesimal. If I assumed the pixel had some size then yes, it would be a very thin cone that leaves the eye/camera and the region after the glass would very, very gradually open up. But the angle of this would be very, very tiny, like 0.01 degrees (as per angular resolution of sensor).   

       There is a spread of angles as you consider a collection of such pixels to ultimately make up an image, but this just adds up to the field of view. The blur is about what gets collected PER pixel and for this it truly consists of a collection area of nearly parallel rays. I say nearly to be pedantic, but really it is about as good as parallel.   

       [not_morrison_rm] gave me a good idea on how to construct this. Imagine birefringent glass with two sawtooth interfaces (at different frequencies, to counter interference patterns), with the short step faces blocked with black paint. The sawtooth patterns are relatively inverted to ensure the same orientation of the flat regions on opposite sides. Ignoring some percentage of blocked light, such glass would split the image into two displaced copies.   

       To construct teleview glass, have 7 or so such layers, each at a random rotation, and between the layers have quarter-wave plates to convert the by-then linearly polarized light back to circular (or basically just not linearly polarized) so that further levels of splitting can occur. With 7 layers there will be 128 displaced images summed together, probably good enough for privacy from eyes (although it is conceivable that someone could write a computer algorithm to unscramble a digital picture taken through it). This whole thing is still about 70cm thick, so not yet elegant and compact.
Eon, Apr 09 2013
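The 2^7 = 128 figure for the layered birefringent stack can be verified by brute force (a sketch; the 3cm per-layer split distance is an invented illustrative value, not from the annotation):

```python
import itertools
import math
import random

random.seed(1)
N_LAYERS = 7
SPLIT = 0.03   # assumed displacement per layer (m), purely illustrative

# Each layer sits at a random rotation; one of its two split images gets
# the layer's displacement vector, the other does not.
layer_vecs = []
for _ in range(N_LAYERS):
    a = random.uniform(0, 2 * math.pi)
    layer_vecs.append((SPLIT * math.cos(a), SPLIT * math.sin(a)))

# Every on/off combination of the 7 displacements is one image copy:
images = []
for choice in itertools.product((0, 1), repeat=N_LAYERS):
    dx = sum(c * v[0] for c, v in zip(choice, layer_vecs))
    dy = sum(c * v[1] for c, v in zip(choice, layer_vecs))
    images.append((dx, dy))

print(len(images))   # 128 displaced copies, as the annotation says
```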
  

       //Many of the rays inside the hexagonal prism will undergo total internal reflection which does way more than displacement - those rays will end up leaving the prism with a different angle than they entered with. If you make the prism tubes short enough so that this doesn't happen a lot then the displacement will again be too short.//   

       In the rendering, there's no internal reflection because there's no interface between the prisms. They are constructional elements, and the material (ie, transparent, r.i.=1.5) is applied to the whole.   

       Truly, I like the idea because it is not as simple a problem as it first seems, but my instinct and the simulation say no.   

       Perhaps [bigsleep] will do a rendering and find different results. It would be interesting if he can render a scene similar to mine.   

       [Eon] you might be interested by fibreoptic bundles, where the fibres needn't be parallel (for example, some flexible light-pipes use such bundles, where the position of one end of a single fibre doesn't correspond closely to the position of the other end). These non-parallel bundles "scramble" an image which is presented close to one end of the bundle. In theory you could take many such bundles (all fairly short) and bundle them side by side to make "glass" that displaced incoming rays by a short random distance. However, it wouldn't work for things far away, because of course the light rays from a distant object fan out and will strike everywhere on the bundle face (ie, you have to think about more than one ray from each point on the source).   

       // Of course. But with distortion, why don't you see large areas of red and blue ? The hex columns seem to have lost lots of the image rather than just moving it around.//   

       I think there's roughly as much red and blue in the distorted image as in the undistorted.   

       //Try much smaller columns e.g. half as thick end-on as their hex width// They're not that far off as it stands - they're about as wide as they are tall. There are unlimitless possibilities (make each one less refractive or more refractive; make the face-angles steeper, put hair on the human figures) but, hey. Maybe there's a model that'll work - render me an example and I'll probably be convinced.   

       One way to achieve the desired effect with available resources would be to replace the window with a camera and screen. Just give the camera a shallow depth of field set far away. On the other hand, if the objective is to be able to look at the mountains whilst standing there bollock naked, the depth of field would be completely non-unirrelevant.
MaxwellBuchanan, Apr 09 2013
  

       [MaxwellBuchanan], if there is no internal reflection then that is just as bad, because the rays will end up crossing into other prisms, with a high probability of leaving from a different prism than they entered, and finally refracting at an interface which is not parallel to the first one. Direction will not be maintained.
Eon, Apr 10 2013
  

       // [link]s. These are ray-traced images// Well, OK, I'm 2/3rds of the way to being convinced. I'll be at least 3/3rds convinced if I write my own code and get the same answer, and probably 4/3rds convinced if anyone can actually build this and make it work.
MaxwellBuchanan, Apr 10 2013
  

       [MaxwellBuchanan], can Cinema4D do half-silvered double sided mirrors? If so, it seems like it would be pretty easy to test my miniblind implementation. (last paragraph of my Apr 08 anno)   

       .........X..
.....x......
.............
/////////
.............
......o.....
  

       x is your red person.
X is your blue person (not to scale).
o is the camera.
///// is an array of half silvered mirrors that should be at 45 degrees.
  

       This only shows one layer, but that should be good enough to validate the concept. You could actually do a very simple version with just two large slats. That should just give you a double image. If the concept is correct, then the double image of the close red person will appear more misaligned than the double image of the far blue person.   

       I just realized I know how to test this concept in the real world. I have noticed that at night with the curtains open, our double pane windows create a double reflection. I should be able to compare the offset between the double images of near and far objects.
scad mientist, Apr 10 2013
  

       I am not convinced, and think that there is still an inherent assumption problem in this. Light coming from scenery to the human eye (or a camera) is not parallel. Viewed objects are not measured in terms of physical dimensions, but in degrees (or minutes) of arc. Thus, a single sensor can't tell if the object is small and near or large and far. It happens that, the way binocular vision works, our eyes can interpret the data to get a better idea, but that doesn't change the optical sense.   

       And no matter how far the object on the far side of the glass is from you, a 10cm displacement at the glass subtends the same angle with respect to the sensor (your eye). So it doesn't matter how far away the object is.   

       As for your program, [Big], the problem is that when you set the distance zones, you didn't scale the (flat) image the same way the eye would see it. Remember, an object 10x as far away will appear 10x smaller. Obviously the objects themselves are scaled because they were in the original photograph, but the image isn't. Shrink the portions of the image properly and run your test again, and see what happens. Or better yet, repeat with a simple checked pattern or similar, with an identical piece put at each of the three respective planes.
MechE, Apr 10 2013
  

       The 45-degree half-silvered mirrors will only maintain direction for rays that bounce an even number of times (remember to consider non-90-degree rays), so they won't work as is.   

       But we can block everything that bounces an odd number of times with same-handedness circular polarizers on either side. Excellent! Thus far this is the easiest construction I have seen.
Eon, Apr 10 2013
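The even-bounce filter can be modelled as a simple parity check (a sketch resting on one physical fact: a mirror reflection flips circular-polarization handedness):

```python
def passes_exit_polarizer(bounces, entry="left"):
    """Track circular-polarization handedness through mirror bounces.

    Each reflection flips the handedness, so a same-handed circular
    polarizer on the exit side passes only even-bounce rays.
    """
    h = entry
    for _ in range(bounces):
        h = "right" if h == "left" else "left"
    return h == entry

print([passes_exit_polarizer(n) for n in range(5)])
# [True, False, True, False, True]: only even bounce counts survive
```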
  

       MechE, I think the source of disagreement boils down to this:   

       Imagine a single, 30cm thick, 45 degree sheet of glass:   

       Top view:   

       ........../   

       ......../..   

       ....../....   

       ..../......   

       ../....o...   

       /..........   

       The sheet only extends halfway up your field of view and you are looking at a mountain behind it, lower part through sheet, upper part over sheet. The question is: Is there an apparent split in the image of the mountain, a discontinuity from relative displacement, when comparing what is seen through the sheet and over it?   

       I'm guessing you say yes.   

       I say no, not for a far off object like a mountain, but yes for close things behind the sheet. This can be easily raytraced.
Eon, Apr 10 2013
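The proposed half-slab test can be worked through with the standard slab-displacement formula (a sketch; the 2m and 5km distances are illustrative assumptions). The physical split the slab produces is fixed, so its apparent angular size shrinks with distance:

```python
import math

def slab_shift(thickness, incidence_deg, n=1.5):
    """Lateral shift of a ray through a tilted parallel-sided slab (direction preserved)."""
    ti = math.radians(incidence_deg)
    tr = math.asin(math.sin(ti) / n)   # Snell's law at the first face
    return thickness * math.sin(ti - tr) / math.cos(tr)

d = slab_shift(0.30, 45)   # roughly 0.099 m, regardless of what is being viewed

splits = {}
for dist, label in ((2.0, "person"), (5000.0, "mountain")):
    splits[label] = math.degrees(math.atan2(d, dist)) * 60   # arcminutes
    print(f"{label}: apparent split = {splits[label]:.3f} arcmin")
```

For the nearby person the split is well over 100 arcminutes (obvious); for the mountain it is a few hundredths of an arcminute, below the eye's roughly 1-arcminute resolution, so no discontinuity would be seen for distant objects.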
  

       And I'm saying optics don't care about how far off the object is, just how much angle of view it covers, so...   

       Yes, there is a discontinuity.
MechE, Apr 10 2013
  

       If it's any help:   

       The main objection to my raytracing through an array of prisms was that some rays will be refracted between prisms, therefore being shifted in angle as well as being translated.   

       I could (had I the energy and patience) redo it, but give each hexagonal prism black side-walls. Then no light will pass between prisms, and only translation will occur. This would be no good for a real window (you could only look through it at one angle), but would do for p.o.c.
MaxwellBuchanan, Apr 10 2013
  

       It can, but I can't.   

       Anyway, I tried it with the same setup as before, but now with each hexagonal prism bounded (ie, divided from the others) by zero-reflectance black.   

       The results are inconclusive. This time, the *nearer* figure is noticeably clearer than the further one; they both have individual "cells" translated relative to their unrefracted positions.   

       However, I can't hand-on-heart swear that C4D is handling the refraction perfectly. It would have to refract each ray in an identical but opposite way on entering and leaving each hex cell. My guess is that, in making its angular calculations, it will round off at some finite limit, in which case the ray won't be restored to its original direction after passing through the prism.   

       Under the circumstances, I think the best answer will be from [bigsleep]'s software, since the direction of the ray can be guaranteed to be the same after refracting through the prism.   

       However, I'm still not certain that tracing rays from (or to - either way) a single point-like camera isn't screwing things up; and I can't be certain that [bigsleep]'s software takes all relevant factors into account. There are weird optics happening here, and I am not convinced that the usual "point like observer" is valid. But I am open to being persuasioned and convincified.   

       One thing that has emerged from the simulations: if this type of window is to be made, it would have to guarantee zero angular deflection after the displacement of the rays. Any residual angular deflection is going to make the distant image much worse than the near image - the exact opposite of what is wanted here.
MaxwellBuchanan, Apr 10 2013
  

       MaxwellBuchanan, would you be able to do that simple raytrace I described with a single thick 1.5 refractive index glass sheet at 45 degrees (near left to far right) that only extends halfway up the two colored bodies?   

       I would like to see the displacement discontinuities compared for the two bodies.
Eon, Apr 10 2013
  

       //a single thick 1.5 refractive index glass sheet at 45 degrees// Well, my old headmaster was right. I'll be buggered. See links - collapse of stout party.   

       The damn thing is counterintuitive, is what it is counter to.   

       I still reckon you'll struggle to implementize it, though.
MaxwellBuchanan, Apr 10 2013
  

       Okay, I'm having trouble explaining this, but it cannot work the way it is being described. Let's think about what happens if you offset the light rays uniformly, say with a prism. The light rays that come from an object that takes up five degrees of the field will be offset by exactly the same amount, regardless of whether that object is near field or far. If you randomize that offset, each pixel (for want of a better word) will be offset by that same distance in a random direction. Thus each object will distort by the same amount regardless of how far away it is, because it subtends the same portion of the view.   

       Far field objects don't have more pixels just because they are far away.
MechE, Apr 10 2013
  

       [MB] Can you show a top view? I think I may see the problem, but I'm not certain from the views available.
MechE, Apr 10 2013
  

       //Can you show a top view?//   

       Done, see link. I replaced the red figure (which was too small to see easily from the top) with a grey ball. The viewer (not shown) is at the bottom of the image, in the middle, looking upwards as drawn here.
MaxwellBuchanan, Apr 10 2013
  

       Yeah, I was thinking of that and wondering if it uses the same principle, or principal. Or both.
MaxwellBuchanan, Apr 10 2013
  

       [MB]- So, the problem with that representation is that the lens is closer to the viewer in one case than the other. That will produce a difference. Repeat the experiment with two identical lenses, one for each figure, and see what happens.   

       [Big] Done, and it still doesn't work out like this. An object that is twice as far away but subtends the same degree of the field of view will be the same apparent size at the optics, and thus experience the same apparent distortion.
MechE, Apr 10 2013
  

       The sheet in [MB]'s ray trace is a lens, so I referred to it as such.   

       You'll notice in regard to your work, I used the more generic optics, which is correct.   

       I think I see what you're trying to say, and it involves treating the offset as a binocular baseline. I'm not convinced that it works, and I suspect it's impossible to build, but other than that you may have a point.
MechE, Apr 10 2013
  

       This is a fascinating discussion which I don't feel qualified to participate in, but would advise those others interested to watch the NatGeo documentary called "The Impossible Lens" about an Aussie named Jim Frazier. He came up with a lens with infinite depth of field after being told it was an impossibility.
AusCan531, Apr 11 2013
  

       //Far field objects don't have more pixels just because they are far away.//   

       I think the basic explanation of this is that they do - if you're not comparing like-for-like, but instead what you actually see nearby vs at a distance.   

       That is, if you imagine that everything is coloured in voxels, then a distant mountain will have many voxels per degree subtended at the observer, while a nearby object like a flower or human will have only a few.
Okay, so now if we consider that there is detail at all scales: if we look at a view we see nearby flowers in some detail, but on a distant mountainside we're seeing whole woods, the snow-line and so on, with the data from many trees, flowers, rocks etc. averaged together in each photoreceptor.
Loris, Apr 11 2013
  

       //So, the problem with that representation is that the lens is closer to the viewer in one case than the other//   

       Ah, OK, you mean because the viewer is looking through the "nearer" part of the glass sheet at one figure, and through the "further" part for another?   

       If that's what you mean then, no, that's not the problem. I just replaced the two figures with vertical rods (one red, one blue), and lined them up so that the nearer (red) rod obscures the further (blue) rod, as seen with no refraction. Then I turned on refraction and re-rendered; the nearer (red) rod is now 'broken' (ie, the part of it behind the glass is displaced sideways), whilst the further (blue) rod is not.   

       I hate to say it, but I think the basic idea here is correct after all. In one sense it's counterintuitive (how does the glass "know" a light ray is coming from a distant as opposed to near object). In another sense it makes sense: the real and displaced rays (traced backwards from the eye) form parallel lines which converge (ie, zero apparent displacement) at infinity.
MaxwellBuchanan, Apr 11 2013
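
       The back-trace geometry above can be checked with a few lines of arithmetic: a lateral offset r seen from distance D subtends atan(r/D), which dwarfs the eye's roughly one-arcminute acuity up close but vanishes at mountain distances. A minimal Python sketch (the distances chosen are hypothetical examples, not from the posts):

```python
import math

EYE_ACUITY_DEG = 1 / 60  # ~1 arcminute, typical human angular resolution

def angular_blur_deg(offset_m, distance_m):
    """Angle subtended at the eye by a lateral ray offset at a given distance."""
    return math.degrees(math.atan2(offset_m, distance_m))

OFFSET = 0.10  # the 10 cm displacement radius from the idea

for d in (0.5, 2.0, 50.0, 5000.0):  # person, room, tree, mountain
    blur = angular_blur_deg(OFFSET, d)
    visible = "blurred" if blur > EYE_ACUITY_DEG else "sharp"
    print(f"{d:8.1f} m  {blur:9.4f} deg  {visible}")
```

       At half a metre the blur is about 11 degrees (a face is unrecognisable); at 5 km it is about a thousandth of a degree, well below visual acuity, so the mountain stays sharp. Trees at 50 m fall in between, matching the "to a lesser extent, trees" caveat in the idea.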
  

       At this point I have to agree on the concept working.   

       Basically, by having an offset between the input and output, you get a non-zero optical baseline, gaining the benefits of (pseudo) binocular vision at each point.   

       Now, however, I have to start quibbling about whether a material can exist that can do this. In order for it to work, a photon entering at a given point has to exit at a point between x and -x. If this exit is at all deterministic (as it would be from the fiberoptic version mentioned in annos), you would not get the advantage of a non-point optical baseline, as each point input would be a point output, and thus the distortion at a given point wouldn't care about the distance.
MechE, Apr 12 2013
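
       The deterministic-versus-random distinction can be put in a toy back-ray-trace (Python; all parameters are illustrative assumptions): each ray traced back from the eye is shifted by a random lateral offset within the radius, direction preserved, and the RMS angular spread of where the rays land is the apparent blur. With a fixed (deterministic) offset the spread would be zero and the image would merely shift, as the anno argues.

```python
import math
import random

def apparent_blur_deg(distance_m, offset_radius_m, n=10_000, seed=1):
    """RMS angular blur (degrees) seen through glass that displaces each
    ray by a random lateral offset within offset_radius_m, direction kept."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        dx = rng.uniform(-offset_radius_m, offset_radius_m)
        # Direction is unchanged, so the backward ray hits the object
        # plane shifted by dx; the eye sees that shift as an angle.
        total += math.atan2(abs(dx), distance_m) ** 2
    return math.degrees(math.sqrt(total / n))

print(apparent_blur_deg(0.5, 0.10))     # near: several degrees of blur
print(apparent_blur_deg(5000.0, 0.10))  # far: effectively none
```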
  

       I think the problem with any fibre approach is that direction won't be preserved; a ray might enter a fibre at some angle (up to the acceptance angle of the fibre), and will be displaced by however much the fibre shifts on its path through the "window", but the ray will exit the fibre at a different angle.   

       It perhaps could be done by having an array of cameras, lensed in such a way as to have a narrow and parallel field of view, and each linked to a small display, but with each camera's display positioned somewhere other than directly behind that camera itself.
MaxwellBuchanan, Apr 12 2013
  

       One-way glass would work better.
whlanteigne, Apr 12 2013
  

       It would, if such a thing existed.
MaxwellBuchanan, Apr 12 2013
  

       aka one-way mirror.
whlanteigne, Apr 12 2013
  

       aka as doesn't exist.   

       A "one way mirror" is just a piece of semi-silvered glass. When you stand on the more brightly-lit side of it, your reflection dominates the image you see. To reverse a "one way mirror", just reverse the intensities of the lighting on either side.
MaxwellBuchanan, Apr 12 2013
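
       The intensity argument is easy to put in numbers. For a semi-silvered pane with reflectance R and transmittance T, a viewer sees R times their own side's brightness superimposed on T times the far side's; whichever product is larger dominates. A toy Python illustration (the lux figures and the 50/50 coating are assumptions for the example):

```python
def seen(own_side_lux, far_side_lux, reflectance=0.5, transmittance=0.5):
    """Reflected vs transmitted brightness reaching a viewer's eye."""
    return reflectance * own_side_lux, transmittance * far_side_lux

# Daytime, viewer outside: bright outdoors, dim indoors.
refl, trans = seen(own_side_lux=10_000, far_side_lux=100)
# refl (5000) >> trans (50): the outside viewer mostly sees a mirror.

# Night, same viewer: lighting reversed, the "mirror" becomes a window.
refl_night, trans_night = seen(own_side_lux=10, far_side_lux=300)
# refl_night (5) << trans_night (150): now the room interior dominates.
```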
  

       //When you stand on the more brightly-lit side of it, your reflection dominates the image you see. //   

       Yes, as the inside of a house is darker than the outside during the day.   

       At night you would cover the window with "shades" or "blinds" or "curtains," all baked concepts; you wouldn't need to look outside at night, anyway, because it's generally too dark to see.
whlanteigne, Apr 12 2013
  

       // as the inside of a house is darker // except if it's a day with dark clouds and you have the bright lights around the bathroom mirror turned on. And even during a normal day, you'll have some rather annoying reflections reducing the quality of your view.   

       // you wouldn't need to look outside at night, anyway, because it's generally too dark to see. // That all depends. A person living in a high-rise in the city would probably have a very nice city-scape to view at night. Other people might occasionally have a nice view of the moon.
scad mientist, Apr 12 2013
  

       //the window with "shades" or "blinds" or "curtains"//   

       I'm just using big sheets of cardboard, with a horizontal gap at the top. That way (in theory) I can stargaze while lying in bed. But, too much cloud in the UK.
not_morrison_rm, Apr 12 2013
  

       //except if it's a day with dark clouds and you have the bright lights around the bathroom mirror turned on.//   

       Use the shades.   

       //A person living in high-rise in the city would probably have a very nice city-scape to view at night//   

       A person who can afford a high-rise apartment with a nice city view can afford the outrageous cost of the teleview glass. I think you found the target market: Donald Trump.
whlanteigne, Apr 12 2013
  

       So [whlanteigne] are you still contending that a half-silvered mirror with manual shades is "better", or are you just saying that the idea is highly unlikely to be implemented any time in the foreseeable future at a price that makes it a better value than standard half-silvered glass and blinds?   

       I agree with the latter. I'd probably only pay a 20% premium to get this instead of standard glass, which makes this a perfectly halfbaked idea.
scad mientist, Apr 16 2013
  

       [MB] Even with cameras (and with cameras, why bother showing anything outwards at all?), you still have the problem that each camera is monocular at the point of incidence. If you had a series of cameras with the images overlaid, you could get the sort of near field blurring this describes, but, again, with cameras why bother, just display only in the direction you want to.   

       Again, I do not believe this is constructible with simple optical elements.
MechE, Apr 16 2013
  

       // I do not believe this is constructible with simple optical elements// Oh, I dunno. Test-drive a pair of middle-aged eyes sometime. Stuff that's too far away to be relevant: clear as kodak. Stuff that's close enough to interact with: fuzzier than a Lithuanian's armpit.
MaxwellBuchanan, Apr 16 2013
  

       That's nice, but neither of those implement the idea as described.   

       An array of spinning micro mirrors or prisms such that each adjacent unit is scanning a different part of the viewable area would approach the idea, but still not the sort of uniform blurring that the idea suggests. That effect is what I think is not possible to produce.
MechE, Apr 16 2013
  

       //So [whlanteigne] are you still contending that a half-silvered mirror with manual shades is "better"//   

       Not "better," but it works, it's inexpensive, and I can go out and buy it today. In fact, there are films I can buy to apply to the glass to do the same thing, so I don't have to replace the windows. I just have to clean them well and apply the film carefully.   

       My bathroom window has a frosty-ish film on it that effectively blurs the view for anyone looking in at night; there is a strategically placed, discreet, 1/4" section cut out so I can peek out if I need to.
whlanteigne, Apr 17 2013
  


 
