The first digital camera was built in 1975. The world’s first commercial digital camera, the Fujix DS-1P, was released in 1989 with a then-impressive 0.4MP resolution. In the decades since, we have seen cameras go digital, take photos in colour, get smaller, gain LCD screens, record video, climb in resolution and become more affordable. But this isn’t all.
The Lytro camera began as Ren Ng’s 2004 Ph.D. project at Stanford. The technology was so groundbreaking that it was developed into a consumer product, unveiled in 2011.
The Lytro camera introduced the world to light field technology. Light field or plenoptic cameras work differently to digital cameras. Digital cameras use a grid (an array) of photosensors to record the incoming pattern of light. When incoming light hits a sensor, that sensor produces an electrical current. Because the amount of current changes with the amount of light, the digital camera can combine the different current levels into a composite pattern of data representing the incoming light. Digital cameras break down the light pattern into a series of binary pixel values that can be sent to a computer and processed.
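The light-to-pixels step above can be sketched in a few lines of code. This is a deliberately simplified illustration, not the behaviour of any real sensor: the photon counts, the saturation point (`full_well`) and the 8-bit output range are all assumptions chosen for the example.

```python
# Toy sketch of a digital sensor turning light into pixel values.
# full_well and bit_depth are illustrative assumptions.

def to_pixel(photon_count, full_well=1000, bit_depth=8):
    """Map a photosensor reading to a quantised pixel value."""
    levels = 2 ** bit_depth - 1                          # 255 for 8-bit output
    fraction = min(photon_count, full_well) / full_well  # clip at saturation
    return round(fraction * levels)

# A 2x2 grid of readings becomes a 2x2 grid of pixel values.
readings = [[0, 250], [500, 1200]]
pixels = [[to_pixel(r) for r in row] for row in readings]
print(pixels)  # [[0, 64], [128, 255]] — more light, higher value
```

The key point is the one the paragraph makes: each sensor only records *how much* light arrived, as a single number per pixel.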
Light field cameras have a light field sensor that incorporates a micro-lens array (MLA), a special compound lens comprising thousands of tiny lenses. Together these capture millions of individual rays of light in the scene along with their colour, luminosity and direction. Capturing a light field had previously required hundreds of cameras and a supercomputer.
A ray of light can be thought of as having five dimensions: three spatial, describing its position in height, width and depth, and two angular, describing the direction in which it travels. This description of the geometric distribution of light is called the plenoptic function, from the Latin word “plenus” meaning full or complete. The full light field is captured with each photograph, so a high volume of data is collected. By combining the hardware, data and specialist software, you not only get a 3D image from a single shot, but also the ability to refocus a photograph after you take it. The ability to refocus and change the photos has led to the name “living pictures”.
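The refocus-after-capture idea can be illustrated with a shift-and-add sketch. A real plenoptic camera refocuses 4D data (two spatial and two angular dimensions); the version below is cut down to one spatial axis `x` and one angular axis `u`, and the tiny synthetic light field is an assumption made purely for the example.

```python
# Minimal shift-and-add refocusing sketch: average the angular views
# of a light field after shifting each view by alpha * its angular
# offset. A simplified illustration, not Lytro's actual pipeline.

def refocus(light_field, alpha):
    """Average angular samples after shifting view u by alpha*(u - centre)."""
    n_u = len(light_field)
    n_x = len(light_field[0])
    centre = n_u // 2
    out = []
    for x in range(n_x):
        total, count = 0, 0
        for u in range(n_u):
            xs = x + alpha * (u - centre)   # where view u sees this point
            if 0 <= xs < n_x:
                total += light_field[u][xs]
                count += 1
        out.append(total / count if count else 0)
    return out

# A point source: each angular view sees it shifted by the view's offset.
lf = [
    [0, 9, 0, 0, 0],  # u = -1 view: point appears at x = 1
    [0, 0, 9, 0, 0],  # u =  0 view: point appears at x = 2
    [0, 0, 0, 9, 0],  # u = +1 view: point appears at x = 3
]
print(refocus(lf, 1))  # [0.0, 0.0, 9.0, 0.0, 0.0] — sharply in focus
print(refocus(lf, 0))  # the wrong focal plane smears the point out
```

Because every view of the scene is stored with its direction, choosing a focal plane is just choosing `alpha` at processing time rather than at the moment of capture.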
Lytro cameras have completely redefined what is possible when taking pictures, so much so that other tech companies have very recently developed their own light field-esque technologies.
Google Camera’s Lens Blur simulates a larger lens and aperture. It asks you to move the camera in an upward sweep to capture a whole series of frames. From these photos, the Lens Blur software on the mobile device uses computer vision algorithms to create a 3D model of the world by estimating the depth of every point in the scene.

UFocus is the editing mode that comes with the new HTC One (M8) phone. The handset has two camera elements: one is the camera itself, and the other captures depth information for post-processing purposes.

Last week, Sony patented their own Lytro-esque light field sensor technology. Many more patents with ambitious aims in photographic technology are emerging. Pretty soon, we might be looking at some very exciting new developments in how we take pictures and what pictures actually mean to us.
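The common thread in these depth-based effects can be sketched as follows. This is a hedged illustration of the general idea, not Lens Blur’s or UFocus’s actual algorithm: the 1D signal, the depth values and the simple box blur are all assumptions made for the example.

```python
# Toy depth-based blur: once every sample has an estimated depth,
# simulate a large aperture by blurring samples whose depth is far
# from the chosen focal plane. Illustrative only.

def depth_blur(row, depths, focus_depth):
    """Box-blur each sample with a radius growing with |depth - focus|."""
    out = []
    for i, d in enumerate(depths):
        radius = abs(d - focus_depth)        # farther from focus -> blurrier
        lo = max(0, i - radius)
        hi = min(len(row), i + radius + 1)
        window = row[lo:hi]                  # always contains at least row[i]
        out.append(sum(window) / len(window))
    return out

row    = [0, 0, 8, 0, 0, 0, 8, 0]
depths = [1, 1, 1, 1, 3, 3, 3, 3]   # left half near, right half far

# Focus on the near half: the left edge stays sharp, the right one smears.
print(depth_blur(row, depths, focus_depth=1))
```

Refocusing here is just re-running the last step with a different `focus_depth`, which is why these effects can be applied and changed after the photo is taken.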
Light field technology has caused a subtle but fundamental shift in the way we handle visual information.
Lytro has changed the way we, as users, want to take and review our photos. We are now thinking differently about them: a photo is no longer a static snapshot of a moment, but a capture of the moment itself.
Image Credit: Lytro