We are always looking into new technology, whether it's 3D or 2D, to see what may be coming. After looking at this, though, we are really excited about what it could become when it reaches the 3D realm. Take a look for yourself.
Scientists at Duke University have built an experimental camera that allows the user—after a photo is taken—to zoom in on portions of the image in extraordinary detail, a development that could fundamentally alter the way images are captured and viewed.
The new camera collects more than 30 times as much picture data as today’s best consumer digital devices. While existing cameras can take photographs that have pixel counts in the tens of millions, the Duke device produces a still or video image with a billion pixels—five times as much detail as can be seen by a person with 20/20 vision.
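The "30 times" and "billion pixels" figures line up with simple arithmetic. A quick back-of-the-envelope check in Python, assuming a high-end 2012 consumer camera of roughly 30 megapixels (the article only says "tens of millions"):

```python
# Back-of-the-envelope pixel math for the figures quoted above.
consumer_mp = 30                      # megapixels, assumed high-end consumer camera
aware2_pixels = 1_000_000_000         # one billion pixels = 1 gigapixel
consumer_pixels = consumer_mp * 1_000_000

ratio = aware2_pixels / consumer_pixels
print(f"Aware-2 captures about {ratio:.0f}x the pixels of a {consumer_mp} MP camera")
```

With that assumption the ratio comes out around 33x, consistent with the "more than 30 times" claim.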
A pixel is one of the many tiny areas of illumination on a display screen from which an image is composed. The more pixels, the more detailed the image.
The Duke device, called Aware-2, is a long way from being a product. The current version needs lots of space to house and cool its electronic boards; it weighs 100 pounds and is about the size of two stacked microwave ovens. It also takes about 18 seconds to shoot a frame and record the data on a disk.
The $25 million project is funded by the Defense Advanced Research Projects Agency, part of the U.S. Department of Defense. The military is interested in high-resolution cameras as tools for aerial or land-based surveillance.
If the Duke device can be shrunk to hand-held size, it could spark an alternative approach to photography. Instead of deciding where to focus a camera, a user would simply shoot a scene, then later zoom in on any part of the picture and view it in extreme detail. That means desirable or useful portions of a photo could be identified after the image was captured.
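In software terms, that post-capture "zoom" is simply cropping a region out of the full-resolution frame, since every part of the scene was recorded in full detail. A minimal illustration (the toy array and `zoom` helper below are hypothetical, standing in for a gigapixel image):

```python
import numpy as np

# Toy 100x100 frame standing in for a gigapixel capture (values arbitrary).
frame = np.arange(100 * 100, dtype=np.uint32).reshape(100, 100)

def zoom(img, top, left, height, width):
    """Return a full-detail region of interest, chosen after capture."""
    return img[top:top + height, left:left + width]

# The viewer picks any region after the shot was taken.
roi = zoom(frame, top=40, left=60, height=10, width=10)
print(roi.shape)  # (10, 10)
```

No refocusing or re-shooting is needed; the detail was already in the data.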
Taking a picture with a traditional digital camera “is like looking through a soda straw since you can only see a narrow part of the scene,” said David Brady, optical engineer at Duke, who led the team that designed the one-gigapixel camera. “Ours is more like a fire hose—the world comes at you full [blast].”
Dr. Brady said that when his team used the device to photograph the Seattle skyline, they were able to zoom in and read the “In” and “Out” signs written on a parking garage a half-mile away. Similarly, if the camera were used to take video images of a tennis match, say, the viewer could zoom in on a player, or on someone at the far end of the stadium, and see both images with equal clarity.
Details of the Duke camera were published Wednesday in the journal Nature.
Many scientists believe the age of such gigapixel photography isn’t too far away.
The Pan-Starrs telescope in Hawaii uses several gigapixel cameras, but it has a relatively narrow field of view. Some drones carry megapixel cameras, but they also tend to have a relatively narrow field of view. The Gigapixl Project, meantime, is using large-format film cameras to create a highly detailed coast-to-coast portrait of North America, focusing on cities, parks and monuments.
By comparison, the Duke device represents the “first cut” at making gigapixel cameras for general use, said Shree Nayar, a computer-vision researcher at Columbia University in New York, who has seen the camera at work but wasn’t involved in the project.
The challenge, he said, is to shrink the electronics and reduce the amount of power the system requires.
Another hurdle is that the capacity of these kinds of cameras to capture data-rich images is far outpacing the ability of computers to usefully process the millions of pixels that make up the picture. Engineers will likely have to come up with sophisticated software to bridge that gap.
The secret of the Duke device is a spherical lens, a design first proposed in the late 19th century. Although very effective spherical lenses exist naturally—the human eye, for example—researchers have long found it tricky to accurately focus images using lab-made versions.
The Duke group overcame the challenge by installing nearly 100 microcameras, each with a 14-megapixel sensor, on the outside of a small sphere about the size of a football. The setup yields nearly 100 separate—but accurately focused—images. A computer connected to the sphere then stitches them together to create a composite whole.
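The compositing step can be sketched as placing each microcamera's tile at its known offset in one large mosaic. This is a minimal sketch under simplifying assumptions not in the article: tiles land on a clean grid with no overlap or lens distortion, whereas the real pipeline must register and blend the nearly 100 focused sub-images.

```python
import numpy as np

# Toy stand-ins: 3x3 grid of tiny tiles instead of ~100 microcameras,
# each with a 14-megapixel sensor.
TILE_H, TILE_W = 4, 4
GRID = 3

# Fake per-camera exposures: each tile filled with its camera index.
tiles = {(r, c): np.full((TILE_H, TILE_W), r * GRID + c, dtype=np.uint8)
         for r in range(GRID) for c in range(GRID)}

# Composite: copy each tile into its known slot in the mosaic.
mosaic = np.zeros((GRID * TILE_H, GRID * TILE_W), dtype=np.uint8)
for (r, c), tile in tiles.items():
    mosaic[r * TILE_H:(r + 1) * TILE_H, c * TILE_W:(c + 1) * TILE_W] = tile

print(mosaic.shape)  # (12, 12)
```

Scaled up, stitching ~100 such 14-megapixel tiles is what yields the roughly one-gigapixel composite.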
The camera described in the Nature paper takes only black-and-white pictures. Dr. Brady said his team will finish building a 10-gigapixel color version by year-end and then will construct a 50-gigapixel device.
The team hopes to begin manufacturing industrial-type gigapixel cameras on a limited basis in 2013. But scientists estimate it would take at least several years before a hand-held, consumer version of the technology becomes available.
Copyright 2012 HD Guru Inc. All rights reserved. HD GURU is a registered trademark.