halfbakery — If you need to ask, you can't afford it.
Summary: A robot servant takes a 3D picture (or a 2D picture, if sufficient for the task) of a coffee table and digests the data in each pixel. After the robot removes or shuffles the items on the table for cleaning, and then finishes cleaning, the machine refers to the pixel data in the original picture to place all the items, perfectly, back in their original places and positions.
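The compare-against-the-original-picture step in the summary can be sketched in Python (a minimal illustration of my own; the tiny 3x3 "images" and grayscale values are made up):

```python
# Store the reference image as raw pixel numbers, then after cleaning
# compare the current view against it to decide whether the table is
# back in its original state.

def pixel_diff(reference, current):
    """Mean absolute difference between two equal-sized grayscale images,
    each given as a list of rows of 0-255 pixel values."""
    total, count = 0, 0
    for ref_row, cur_row in zip(reference, current):
        for r, c in zip(ref_row, cur_row):
            total += abs(r - c)
            count += 1
    return total / count

reference = [[0, 0, 255], [0, 255, 0], [255, 0, 0]]   # "before cleaning" snapshot
restored  = [[0, 0, 255], [0, 255, 0], [255, 0, 0]]   # items put back correctly
shuffled  = [[255, 0, 0], [0, 255, 0], [0, 0, 255]]   # book facing the wrong way

print(pixel_diff(reference, restored))  # 0.0 -> table matches the snapshot
print(pixel_diff(reference, shuffled))  # > 0 -> something is out of place
```

A real system would of course work on full-resolution camera frames, but the decision rule is the same: drive the difference toward zero.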
Seems to me the most difficult task in programming a robot servant would be to have the machine return all items perfectly to their original place and position after cleaning a table top or such. A program could be written to record all the motions the robot made while removing items, and then reverse all the motions after cleaning, but the coffee-table book on "The Evolution of Artificial Intelligence" might be replaced with its title facing the wrong direction.
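The record-and-reverse approach described above can be sketched as follows (a hypothetical illustration; the motion-log format is my own, and it deliberately records only translations, to show how the book's title can end up facing the wrong way):

```python
# Record each move the robot makes, then undo them in reverse order.

def record_move(log, item, dx, dy):
    log.append((item, dx, dy))

def replay_reversed(log, positions):
    # Undo each recorded move, last move first.
    for item, dx, dy in reversed(log):
        x, y = positions[item]
        positions[item] = (x - dx, y - dy)

positions = {"book": (10, 20)}
log = []
record_move(log, "book", 5, 0)    # slide the book aside for cleaning
positions["book"] = (15, 20)
record_move(log, "book", 0, -3)   # nudge it again
positions["book"] = (15, 17)

replay_reversed(log, positions)
print(positions["book"])  # (10, 20): position restored...
# ...but if the robot also spun the book and never logged the rotation,
# the title still ends up facing the wrong direction.
```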
Not being a programmer, I have no idea what keywords to search for to determine what coding language could insert and remove images from an algorithm, changing the code on the fly as needed. Maybe this has been done for decades, maybe not.
Images converted to numbers
https://en.wikipedi.../wiki/Digital_image A digital image is a numeric representation (normally binary) of a two-dimensional image [Sunstone, Jul 06 2017]
[2] Image recognition to assist robots in given tasks
www.digitalimagestospeech.com Ask and it shall be given [Sunstone, Jul 06 2017]
Point cloud
https://en.m.wikipe...rg/wiki/Point_cloud [theircompetitor, Jul 06 2017]
[3] About very interesting 3D scanners
https://en.m.wikipe...org/wiki/3D_scanner [Sunstone, Jul 06 2017]
Computer vision
https://en.wikipedi...iki/Computer_vision Plenty of other info available elsewhere [TomP, Jul 07 2017]
// Seems to me the most difficult task in programming a robot servant would be to have the machine return all items perfectly to their original place and position after cleaning a table top or such. //
Regardless, I think your approach would solve the stated problem.
Except, in the last paragraph, are you saying the algorithm is itself built out of images?
Thank you for the question, [notexactly]. The majority of the basic cleaning and navigation algorithm would be written in normal code, as with a Neato robot vacuum.
Only the pixel data would be used in place of the human programmer for the needs of the moment, versus having to utilize AI or have a human rewrite the code for each unique circumstance.
The numeric data in each pixel would provide the food the code thrives on, and would only be used to drive the table-surface deconstruction and reconstruction movements unique to each table.
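One way the pixel numbers could drive those movements (my own sketch; the 5x2 "table image" and numeric pixel values are invented) is to group adjacent non-background pixels into blobs, each blob standing for one item, and record each blob's centroid as that item's target coordinates:

```python
# Simple flood fill: find connected groups of non-background pixels
# in the reference picture and report each group's centroid, i.e. the
# spot each item belongs at on this particular table.

def find_items(image, background=0):
    """Return the centroid (row, col) of each connected group of
    non-background pixels in a 2D grid of pixel values."""
    seen = set()
    centroids = []
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            if image[r][c] != background and (r, c) not in seen:
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:
                    cr, cc = stack.pop()
                    blob.append((cr, cc))
                    for nr, nc in ((cr+1, cc), (cr-1, cc), (cr, cc+1), (cr, cc-1)):
                        if (0 <= nr < rows and 0 <= nc < cols
                                and image[nr][nc] != background
                                and (nr, nc) not in seen):
                            seen.add((nr, nc))
                            stack.append((nr, nc))
                row_mean = sum(p[0] for p in blob) / len(blob)
                col_mean = sum(p[1] for p in blob) / len(blob)
                centroids.append((row_mean, col_mean))
    return centroids

table = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 2],
]
print(find_items(table))  # [(0.5, 1.5), (2.0, 4.0)]: a 2x2 item and a 1-pixel item
```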
The robot could of course use "brute force" and continue to move items on the table until the pixel data of the current view matches the original image, the way a human battles a Rubik's Cube, but this is obviously impractical...
Searching for "computer solves Rubik's Cube" shows that a method similar to this already exists, and I assume those algorithms recognize colors in order to solve the problem.
The same problem-solving procedure would be used in three dimensions to speed the "item on the table" replacement procedure...
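The brute-force search mentioned above can be shown in miniature (my own toy example; the item names and "slots" are invented): try arrangements of the items until one matches the reference, the way a naive Rubik's Cube search tries move sequences.

```python
# Exhaustively try orderings of the items across the table slots
# until one equals the reference arrangement.
from itertools import permutations

def restore_by_search(reference, items):
    """Return the first ordering of `items` that equals `reference`,
    or None if the item sets don't match."""
    for arrangement in permutations(items):
        if list(arrangement) == reference:
            return list(arrangement)
    return None

reference = ["book", "remote", "vase"]
scattered = ["vase", "book", "remote"]
print(restore_by_search(reference, scattered))  # ['book', 'remote', 'vase']
```

With n items this search is O(n!), which is exactly why the idea calls brute force impractical and why a direct pixel-matching step is worth having.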
However, I don't know how, or whether it is possible under present coding limitations, if any, to modify the Rubik's Cube procedure in code for this purpose, or to add the extra step of dropping new images in [1] and taking old images out of the basic code as needed.
Obviously this method would be useful for many things besides cleaning tables. Many waiters, waitresses, and table cleaners are actually actors strictly killing time, dreaming of a fabulous career if they could just snare a speaking part in the next "Star Wars" sequel.
[1] You ask the robot to bring you a Coca-Cola from the fridge. The robot scans the Internet for images of a Coca-Cola container, retrieves the closest pixel match from the ice box, and brings it to you. You could tell the robot to bring a flower to your sweetie [2], etc.
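The "closest pixel match" step in footnote [1] could look something like this (a hypothetical sketch: each "image" is reduced to a coarse brightness histogram, and the item names and pixel signatures below are made up; a real system would compare downloaded photos):

```python
# Reduce each image to a coarse histogram, then pick the fridge item
# whose histogram is nearest the reference image's.

def histogram(pixels, buckets=4):
    """Coarse brightness histogram of a flat list of 0-255 pixel values."""
    hist = [0] * buckets
    for p in pixels:
        hist[min(p * buckets // 256, buckets - 1)] += 1
    return hist

def closest_match(target_pixels, candidates):
    """Return the candidate name whose histogram is nearest the target's."""
    target = histogram(target_pixels)
    def distance(name):
        h = histogram(candidates[name])
        return sum(abs(a - b) for a, b in zip(target, h))
    return min(candidates, key=distance)

reference_coke = [200, 30, 30, 220, 40]          # stand-in "Coca-Cola" pixels
fridge = {
    "cola_can":    [210, 25, 35, 215, 45],
    "milk_carton": [240, 250, 245, 235, 230],
}
print(closest_match(reference_coke, fridge))  # cola_can
```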
A point cloud describes a 3D space and is already used in augmented reality apps.
Everything coming in to a digital image is at some level pixels, but this task is impossible without a notion of depth.
A point cloud or other 3D representation of the scene would indeed be more effective than a 2D photo. Either way, the robot needs to be able to identify individual objects in the stored representation of the scene and match them with the objects it sees after doing the cleaning.
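That matching step can be sketched as follows (my own stand-in for real point-cloud matching: each object is reduced to the centroid of its points, and current objects are greedily paired with the nearest stored object; the object names and coordinates are invented):

```python
# Pair each object seen after cleaning with an object in the stored
# scene by nearest 3D centroid.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def match_objects(stored, current):
    """Greedily pair each current object with the nearest stored one.
    Both arguments map object id -> list of (x, y, z) points."""
    stored_centroids = {name: centroid(pts) for name, pts in stored.items()}
    pairs = {}
    for name, pts in current.items():
        c = centroid(pts)
        best = min(stored_centroids, key=lambda s: sum(
            (a - b) ** 2 for a, b in zip(stored_centroids[s], c)))
        pairs[name] = best
        del stored_centroids[best]   # each stored object is used once
    return pairs

stored  = {"book": [(0, 0, 0), (2, 0, 0)], "vase": [(10, 10, 0)]}
current = {"obj1": [(9, 11, 0)], "obj2": [(1, 1, 0)]}
print(match_objects(stored, current))  # {'obj1': 'vase', 'obj2': 'book'}
```

Greedy nearest-centroid pairing is only a sketch; a production matcher would also compare shape and color, and use optimal assignment rather than greedy pairing.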
Thanks to [theircompetitor] for the very informative point cloud info and link.
"While point clouds [created with 3D scanners] can be directly rendered and inspected, usually point clouds themselves are generally not directly usable in most 3D applications, and therefore are usually converted to polygon mesh or triangle mesh models..." |
"A 3D scanner collects distance information about surfaces within its field of view. The "picture" produced by a 3D scanner [3] describes the distance to a surface at each point in the picture. |
This need for conversion to mesh models, plus the fact that "For most situations, a single scan will not produce a complete model of the subject. Multiple scans, even hundreds, from many different directions are usually required to obtain information about all sides of the subject", is definitely open to improvement, so that point cloud data could be rendered directly to CAD and CAM/CNC equipment and robots without conversion.
It's very encouraging that code is available to convert point cloud data to CAD; I see no reason why, by similar means, image pixel data could not also be converted to 2D CAD, which would be quite sufficient to reconstruct a 2D table setting, versus needing to physically reproduce the items on the table in three dimensions.
The 2D CAD data would be used to "CAD compare" the original table setup with the current setup, and to manipulate the cleaning robot's arms to replicate the original image, the same way 3D CAD drawings direct CNC equipment arms to create a three-dimensional design.
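That "CAD compare" step can be sketched like so (my own illustration; the item names and the (x, y, angle) layout format are invented, not an actual CAD interchange format):

```python
# Diff the original 2D layout against the current one and emit a move
# for every item whose pose has drifted, including rotation.

def cad_compare(original, current, tolerance=0.01):
    """Return (item, from_pose, to_pose) for every item whose current
    (x, y, angle) pose differs from the original layout."""
    moves = []
    for item, target in original.items():
        pose = current[item]
        if any(abs(a - b) > tolerance for a, b in zip(pose, target)):
            moves.append((item, pose, target))
    return moves

original = {"book": (10.0, 20.0, 0.0), "vase": (30.0, 5.0, 90.0)}
current  = {"book": (10.0, 20.0, 180.0), "vase": (30.0, 5.0, 90.0)}
print(cad_compare(original, current))
# Only the book needs a move: it is in place but facing the wrong way.
```

Including the angle in the pose is what catches the wrong-facing coffee table book from the original post.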
This is simply an application of computer vision [link], very doable.