
Computational Photography Is Ready for Its Close-Up


More than 87 million Americans traveled internationally in 2017, a record number according to the U.S. National Travel and Tourism Office. If you were among them, perhaps you visited a destination such as Stonehenge, the Taj Mahal, Ha Long Bay, or the Great Wall of China. And you might have used your phone to shoot a panorama, maybe even spinning yourself all the way around with your phone to shoot a super-wide, 360-degree view of the landscape.

If you were successful—meaning there were no misaligned sections, vignetting, or color shifts—then you experienced a simple yet effective example of computational photography. But in the past few years, computational photography has expanded beyond such narrow uses. It could not only give us a different perspective on photography but also change how we view our world.

What Is Computational Photography?

Marc Levoy, professor of computer science (emeritus) at Stanford University, principal engineer at Google, and one of the pioneers in this emerging field, has defined computational photography as a variety of “computational imaging techniques that enhance or extend the capabilities of digital photography [in which the] output is an ordinary photograph, but one that could not have been taken by a traditional camera.”

According to Josh Haftel, principal product manager at Adobe, adding computational elements to traditional photography allows for new opportunities, particularly for imaging and software companies: “The way I see computational photography is that it gives us an opportunity to do two things. One of them is to try and shore up a lot of the physical limitations that exist within mobile cameras.”

Getting a smartphone to simulate shallow depth of field (DOF)—a hallmark of a professional-looking image, since it visually separates the subject from the background—is a good example. What prevents a camera on a very thin device, like a phone, from being able to capture an image with a shallow DOF are the laws of physics.

“You can’t have shallow depth of field with a really small sensor,” says Haftel. But a large sensor requires a large lens. And since most people want their phones to be ultrathin, a large sensor paired with a big, bulky lens isn’t an option. Instead, phones are built with small prime lenses and tiny sensors, producing a large depth of field that renders all subjects near and far in sharp focus.

Haftel says makers of smartphones and simple cameras can compensate for this by using computational photography to “cheat by simulating the effect in ways that trick the eye.” Consequently, algorithms are used to determine what’s considered the background and what’s considered a foreground subject. Then the camera simulates a shallow DOF by blurring the background.
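The segment-then-blur approach Haftel describes can be sketched in a few lines. This is a toy illustration, not any vendor's actual pipeline: real phones estimate depth with dual pixels, stereo pairs, or neural networks, whereas here a ready-made depth map and a naive box blur stand in for those stages.

```python
import numpy as np

def box_blur(img, k=5):
    """Naive box blur: mean over a k x k window, edges clamped."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fake_shallow_dof(image, depth, focus_depth, tolerance=1.0):
    """Keep pixels near focus_depth sharp; swap the rest for a blurred copy."""
    blurred = box_blur(image)
    in_focus = np.abs(depth - focus_depth) <= tolerance
    return np.where(in_focus, image, blurred)

# Toy 8x8 grayscale scene: a bright subject at depth 1, background at depth 5.
image = np.zeros((8, 8))
image[3:5, 3:5] = 1.0
depth = np.full((8, 8), 5.0)
depth[3:5, 3:5] = 1.0
result = fake_shallow_dof(image, depth, focus_depth=1.0)
```

The subject pixels come through untouched while everything classified as background is softened, which is exactly the "trick the eye" effect described above; production systems differ mainly in how well the depth estimate hugs the subject's edges.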

The second way Haftel says computational photography can be used is to employ new processes and techniques to help photographers do things that aren’t possible using traditional tools. Haftel points to HDR (high dynamic range) as an example.

“HDR is the ability to take multiple shots simultaneously or in rapid succession, and then merge them together to overcome the limitations of the sensor’s natural capability.” In effect, HDR, particularly on mobile devices, can expand the tonal range beyond what the image sensor can capture naturally, allowing you to capture more details in the lightest highlights and darkest shadows.
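The merge step Haftel mentions can be sketched as a weighted average over the bracketed shots. The hat-shaped weighting and the linear-response assumption below are simplifications of mine, not Adobe's or anyone's shipping algorithm: each frame is divided by its exposure time to estimate scene radiance, and pixels near the clipped extremes are down-weighted so blown highlights and crushed shadows contribute little.

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge bracketed shots (values in [0, 1]) into one radiance estimate."""
    num = np.zeros_like(exposures[0], dtype=float)
    den = np.zeros_like(exposures[0], dtype=float)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: 0 at clip, 1 at mid-gray
        num += w * img / t                 # divide by time to estimate radiance
        den += w
    return num / np.maximum(den, 1e-8)

# Two toy exposures of the same two-pixel scene: a short one that keeps
# highlight detail and a 20x longer one that lifts the shadows.
short = np.array([[0.02, 0.5]])   # shadow nearly black, highlight well exposed
long_ = np.array([[0.4, 1.0]])    # shadow readable, highlight fully clipped
radiance = merge_hdr([short, long_], times=[1.0, 20.0])
```

The clipped highlight in the long frame gets zero weight, so the short frame alone decides that pixel, while the shadow pixel leans on the cleaner long exposure: the tonal range of the result exceeds what either single frame recorded.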

When Computational Photography Falls Short

Not all implementations of computational photography have been successful. Two bold attempts were the Lytro and Light L16 cameras: Instead of blending traditional and computational photo features (like iPhones, Android phones, and some standalone cameras do), the Lytro and Light L16 attempted to focus solely on computational photography.

The first to hit the market was the Lytro light-field camera, in 2012, which let you adjust a photo’s focus after you captured the shot. It did this by recording the direction of the light entering the camera, which traditional cameras don’t do. The technology was intriguing, but the camera had problems, including low resolution and a difficult-to-use interface.
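The refocus-after-capture idea rests on a principle that can be demonstrated without Lytro's actual optics: if you have views of the scene from slightly different points across the aperture, shifting each view in proportion to its aperture offset and averaging brings one depth plane into focus while smearing the rest. The one-dimensional shift-and-add demo below is a deliberately minimal stand-in for light-field rendering, with sign conventions chosen for the toy setup.

```python
import numpy as np

def refocus(views, offsets, disparity):
    """Shift-and-add refocusing over sub-aperture views.

    views:     2-D images taken from slightly shifted viewpoints
    offsets:   horizontal aperture offset of each view (aperture units)
    disparity: pixels of shift per aperture unit; choosing it selects
               which depth plane ends up in focus after averaging.
    """
    acc = np.zeros_like(views[0], dtype=float)
    for img, u in zip(views, offsets):
        shift = int(round(u * disparity))
        acc += np.roll(img, shift, axis=1)  # wrap-around shift keeps it simple
    return acc / len(views)

# Three views of a bright point whose image slides 1 pixel per aperture unit.
base = np.zeros((1, 7))
views, offsets = [], [-1, 0, 1]
for u in offsets:
    v = base.copy()
    v[0, 3 + u] = 1.0                # the point moves with the viewpoint
    views.append(v)

sharp = refocus(views, offsets, disparity=-1)  # undo the slide: point realigns
blurry = refocus(views, offsets, disparity=0)  # no correction: point smears
```

Changing `disparity` after capture is the software analog of turning the focus ring, which is why a light-field camera can defer the focusing decision until the photo is already taken.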

Lytro

It also had a rather narrow use case. As Dave Etchells, founder, publisher, and editor-in-chief of Imaging Resource points out, “While being able to focus after the fact was a cool feature, the aperture of the camera was so small, you couldn’t really distinguish distances unless there was something really close to the camera.”

For example, say you’re shooting a baseball player at a local baseball diamond. You could take a photo up close to the fence and also capture the player through the fence, even if he’s far away. Then you could easily change the focus from the fence to the player. But as Etchells points out, “How often do you actually shoot a photo like that?”

A more recent device aiming to be a standalone computational camera was the Light L16, an attempt at producing a thin, portable camera with image quality and performance on a par with a high-end D-SLR or mirrorless camera. The L16 was designed with 16 different lens-and-sensor modules in a single camera body. Powerful onboard software would construct one image from the various modules.

Etchells was initially impressed with the concept of the Light L16. But as an actual product, he said, “it had a variety of problems.”

Light L16

For example, Light, the camera and photography company that makes the L16, claimed that the data from all those little sensors would be equivalent to having one big sensor. “They also claimed that it was going to be D-SLR quality,” says Etchells. But in its field tests, Imaging Resource found that this was not the case.

There were other issues, including that certain areas of the photo had excessive noise, “even in bright areas of the image … And there was practically no dynamic range: The shadows just plugged up immediately,” says Etchells, meaning that in certain sections of photos—including the sample photos the company was using to promote the camera—there was hardly any detail in the shadows.

“It was also just a disaster in low light,” says Etchells. “It just wasn’t a very good camera, period.”

What’s Next?

Despite these shortfalls, many companies are forging ahead with new implementations of computational photography. In some cases, they’re blurring the line between what’s considered photography and other types of media, such as video and VR (virtual reality).

For example, Google will expand the Google Photos app using AI (artificial intelligence) for new features, including colorizing black-and-white photos. Microsoft is using AI in its Pix app for iOS so users can seamlessly add business cards to LinkedIn. Facebook will soon roll out a 3D Photos feature, which “is a new media type that lets people capture 3D moments in time using a smartphone to share on Facebook.” And in Adobe’s Lightroom app, mobile-device photographers can utilize HDR features and capture images in the RAW file format.

VR and Computational Photography

While mobile devices and even standalone cameras are using computational photography in intriguing ways, even more powerful use cases are coming from the world of extended-reality platforms, such as VR and AR (augmented reality). For James George, CEO and co-founder of Scatter, an immersive media studio in New York, computational photography is opening up new ways for artists to express their visions.

“At Scatter, we see computational photography as the core enabling technology of new creative disciplines that we’re trying to pioneer... Adding computation could then start to synthesize and simulate some of the same things that our eyes do with the imagery that we see in our brains,” says George.

Essentially, it comes down to intelligence. We use our brains to think about and understand the images we perceive.

“Computers are starting to be able to look out into the world and see things and understand what they are in the same way we can,” says George. “So computational photography is an added layer of synthesis and intelligence that goes beyond just the pure capturing of a photo but actually starts to simulate the human experience of perceiving something.”


The way Scatter is using computational photography is called volumetric photography, which is a method of recording a subject from various viewpoints and then using software to analyze and recreate all those viewpoints in a three-dimensional representation. (Both photos and video can be volumetric and appear as 3D-like holograms you can move around within a VR or AR experience.) “I’m particularly interested in the ability to reconstruct things in more than just in a two-dimensional way,” says George. “In our memory, if we walk through a space, we can actually recall spatially where things were in relationship to each other.”

George says that Scatter is able to extract and create a representation of a space that “is completely and freely navigable, in the way you might be able to move through it like a video game or a hologram. It’s a new medium that’s born out of the intersection between video games and filmmaking that computational photography and volumetric filmmaking [are] enabling.”

To help others produce volumetric VR projects, Scatter has developed DepthKit, a software application that lets filmmakers take advantage of the depth sensor from cameras such as the Microsoft Kinect as an accessory to an HD video camera. In doing so, DepthKit, a CGI and video-software hybrid, produces lifelike 3D forms “suited for real-time playback in virtual worlds,” says George.

Scatter has produced several powerful VR experiences with DepthKit using computational photography and volumetric filmmaking techniques. In 2014, George collaborated with Jonathan Minard to create “Clouds,” a documentary exploring the art of code that included an interactive component. In 2017, Scatter produced a VR adaptation based on the film Zero Days, using VR to provide audiences with a unique perspective inside the invisible world of cyber warfare—to see things from the perspective of the Stuxnet virus.

One of the most powerful DepthKit-related projects is “Terminal 3,” an augmented reality experience by Pakistani artist Asad J. Malik, which premiered earlier this year at the Tribeca Film Festival. The experience lets you virtually step into the shoes of a US border patrol officer via a Microsoft HoloLens and interrogate a ghost-like 3D volumetric hologram of someone who appears to be a Muslim (there are six total characters you can interview).

“Asad is a Pakistani native who emigrated to the US to attend college and had some pretty negative experiences being interrogated about his background and why he was there. Shocked by that experience, he created ‘Terminal 3,'” says George.

One of the keys to what makes the experience so compelling is that Malik’s team at 1RIC, his augmented reality studio, used DepthKit to turn video into volumetric holograms, which can then be imported into real-time video game engines such as Unity, or 3D graphics tools such as Maya and Cinema 4D. By adding the depth-sensor data from the Kinect to the D-SLR video in order to correctly position the hologram inside the AR virtual space, the DepthKit software turns the video into computational video. A black-and-white checkerboard is used to calibrate the D-SLR and the Kinect together, then both cameras can be used simultaneously to capture volumetric photos and video.
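The geometric core of positioning a hologram from depth data is back-projection: once calibration (e.g. with the checkerboard mentioned above) has yielded the camera's intrinsics, every depth pixel maps to a 3D point via the pinhole model. The sketch below is a generic textbook version with made-up intrinsics, not DepthKit's actual code.

```python
import numpy as np

def backproject(depth_map, fx, fy, cx, cy):
    """Turn a depth image into a 3-D point cloud with the pinhole model.

    For pixel (u, v) with depth z, the camera-space point is
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy,  z = z.
    fx, fy (focal lengths) and cx, cy (principal point) are the
    intrinsics a checkerboard calibration would provide.
    """
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_map
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# 2x2 toy depth image with hypothetical unit-focal-length intrinsics.
depth = np.array([[2.0, 2.0],
                  [2.0, 4.0]])
cloud = backproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

A second calibrated transform (from the checkerboard that links the two cameras) then carries these points into the color camera's frame, which is how the color video and the depth geometry end up fused into one volumetric hologram.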

Since these AR experiences created with DepthKit are similar to the way video games work, an experience like “Terminal 3” can produce powerful interactive effects. For example, George says Malik allows the holograms to change form as you interrogate them: If during the interrogation, your questions become accusatory, the hologram dematerializes and appears less human. “But as you start to invoke the person’s biography, their own experiences, and their values,” says George, “the hologram actually starts to fill in and become more photorealistic.”

In creating this subtle effect, he says, you can reflect on the perception of the interrogator and how they might see a person “as just an emblem instead of an actual person with a true identity and uniqueness.” In a way, it could give users a greater level of understanding. “Through a series of prompts, where you’re allowed to ask one question or another,” says George, “you are confronted with your own biases, and at the same time, this individual story.”

Like most emerging technologies, computational photography is experiencing its share of both successes and failures. This means some important features or whole technologies may have a short shelf life. Take the Lytro: In 2017, just before Google bought the company, Lytro shuttered pictures.lytro.com, so you could no longer post images on websites or social media. For those who miss it, Panasonic has a Lytro-like focusing feature called Post Focus, which it has included in various high-end mirrorless cameras and point-and-shoots.

The computational photography tools and features we’ve seen thus far are just the start. I think these tools will become much more powerful, dynamic, and intuitive as mobile devices are designed with newer, more versatile cameras and lenses, more powerful onboard processors, and more expansive cellular networking capabilities. In the very near future, you may begin to see computational photography’s true colors.
