Spotlight, where we tell stories about Light

The L16: Under the Hood

by Rajiv Laroia, Light co-founder and CTO

Since launching the L16, we have received many questions about how Light’s computational imaging technology works. While the final product specifications won’t be announced until spring 2016, we are glad to share a bit more information about our systems design approach to photography.

Below is a video of a talk I gave at Stanford University two weeks ago, which provides more details about Light’s technology. We're including some time markers to make navigating a bit easier. While this talk is still fairly broad, I hope it’ll answer a few more of your technical questions. You can also check out this great article by Tim Moynihan at WIRED to learn more about how the L16 will work.

Our team will be releasing more information and more images on our website and blog in the coming weeks, so stay tuned!

In a nutshell, what is the Light camera, and what problem are you solving? (1:50)

What are the innovations that make the L16 possible? (4:02)

Tell me more about molded plastic lenses… (4:52)

But how can plastic lenses possibly be as good as glass lenses? (5:15)

How are plastic lenses better? (6:45)

Tell me more about diffraction… (8:45)

Why are smartphone cameras not good enough? (10:20)

How does the L16 solve common smartphone camera issues? (14:52)

How does the L16 take a picture? (17:00)

What does the inside of the L16 camera look like? (21:20)

How does the L16 combine images? (22:25)

How does the L16 capture 10x as much light as a cell phone camera? (26:18)

How does the L16 give you continuous optical zoom from 35mm–150mm? (28:02)

How do we control depth-of-field, bokeh and perspective using computational imaging? (31:55)

What about High Dynamic Range (HDR)? (34:20)

What about low-light performance? (35:50)

What about aperture and shutter control? (37:00)

Can I see some pictures taken with the camera? (40:55)

If an object is close to the camera and it gets out of parallax, what do you do? (44:10)

How do you handle calibration on the cameras? (45:05)

Is there an optimal number of sensors and lenses? Why did you choose 16? (47:38)

Where does the processing happen? (48:27)

Is there a difference in color between each module? (49:40)

Is there a plan for image stabilization? (50:10)

How does the user interact with the camera? (52:06)

Does the camera capture video, and do you plan to bring the technology to smartphones? (53:40)

How good is your battery life? (54:28)

How did you decide on the particular arrangement of the camera modules? (55:03)

What was your biggest surprise on this journey? (56:04)

Does the camera have a problem with distortion? (56:57)

Can you give some more detail about calibration? (58:00)

Will the L16 have enough computational capabilities? (58:39)