Building Block for Smarter Vision Systems

Wednesday, September 15, 2010 @ 08:09 PM gHale


Computer vision systems can struggle to make sense of a single image, but a new method may give the system a deeper understanding by reasoning about the physical constraints of the scene.
In much the same way that a child might use a set of toy building blocks to assemble something that looks like a building depicted on the cover of the toy set, the computer would analyze an outdoor scene by using virtual blocks to build a three-dimensional approximation of the image that makes sense based on volume and mass.
“When people look at a photo, they understand that the scene is geometrically constrained,” said Abhinav Gupta, a post-doctoral fellow at Carnegie Mellon University’s (CMU) Robotics Institute. “We know that buildings aren’t infinitely thin, that most towers do not lean, and that heavy objects require support. It might not be possible to know the three-dimensional size and shape of all the objects in the photo, but we can narrow the possibilities. In the same way, if a computer can replicate an image, block by block, it can better understand the scene.”

This novel approach to automated scene analysis could eventually allow researchers to understand not only the objects in a scene, but the spaces in between them and what might lie behind areas obscured by objects in the foreground, said Alexei A. Efros, associate professor of robotics and computer science at CMU. That level of detail would be important, for instance, if a robot needed to plan a route where it might walk, he said.
Understanding outdoor scenes remains one of the great challenges of artificial intelligence. One approach has been to identify features of a scene, such as buildings, roads and cars, but this provides no understanding of the geometry of the scene, such as the location of walkable surfaces. Another approach, which Efros and fellow CMU roboticist Martial Hebert pioneered with former student Derek Hoiem, now of the University of Illinois, Urbana-Champaign, has been to map the planar surfaces of an image to create a rough 3-D depiction of an image, similar to a pop-up book. But that approach can lead to depictions that are highly unlikely and sometimes physically impossible.
In the new method, Gupta, Efros and Hebert break the image into segments corresponding to objects in the scene. Once the ground and sky are identified, the remaining segments can be assigned candidate geometric shapes. The segments are also classified as light or heavy based on appearance; a surface that looks like a brick wall, for instance, is classified as heavy.
The computer then attempts to reconstruct the image using virtual blocks. If a heavy block appears unsupported, the computer must substitute an appropriately shaped block, or assume that a supporting block is hidden from view in the original image.
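The support check described above can be illustrated with a minimal sketch. This is not the researchers' actual system; the `Block`, `is_supported`, and `flag_unsupported` names, the 2-D geometry, and the tolerance value are all illustrative assumptions, meant only to show the kind of physical-plausibility reasoning the article describes.

```python
# Hypothetical sketch of block-support reasoning; not the CMU authors' code.
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    name: str
    x: float       # horizontal position of the block's left edge
    width: float   # horizontal extent
    bottom: float  # height of the block's lower edge above the ground plane
    height: float  # vertical extent
    heavy: bool    # appearance-based class (e.g. a brick wall reads as heavy)

def is_supported(block: Block, others: List[Block], tol: float = 1e-6) -> bool:
    """A block counts as supported if it rests on the ground plane,
    or if its bottom sits on the top face of an overlapping block."""
    if abs(block.bottom) < tol:  # resting directly on the ground
        return True
    for other in others:
        if other is block:
            continue
        top = other.bottom + other.height
        overlap = (min(block.x + block.width, other.x + other.width)
                   - max(block.x, other.x))
        if overlap > 0 and abs(top - block.bottom) < tol:
            return True
    return False

def flag_unsupported(blocks: List[Block]) -> List[str]:
    """Heavy blocks that float in mid-air violate the physical constraints;
    the fit must be revised, or an occluded supporting block assumed."""
    return [b.name for b in blocks if b.heavy and not is_supported(b, blocks)]
```

For example, a heavy block whose bottom hangs at height 5 with nothing beneath it would be flagged, prompting the system either to try a different block shape or to hypothesize a hidden support, exactly the two remedies the article mentions.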
This qualitative volumetric approach to scene understanding is so new that no established datasets or evaluation methodologies exist for it, Gupta said. In estimating the layout of surfaces other than sky and ground, the method is better than 70 percent accurate, and its performance is almost as good when its segmentation is compared against ground truth, he said. Overall, Gupta assesses the analysis as very good for 30 to 40 percent of the images and adequate for another 20 to 30 percent.