In image processing, computer graphics, and photography, high dynamic range imaging (HDRI or just HDR) is a set of techniques that allows a greater dynamic range of luminance between the lightest and darkest areas of a scene than standard digital imaging techniques. The intention of HDRI is to accurately represent the wide range of intensity levels found in real scenes, from direct sunlight to deep shadow.
High dynamic range imaging was originally developed in the 1930s and 1940s by Charles Wyckoff. Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1940s. The process of tone mapping combined with bracketed exposures of conventional digital images, giving the end result a high, often exaggerated dynamic range, was first reported in 1988 by Zeevi, Ginosar and Hilsenrath.[1] Further work in 1993[2] led to a mathematical theory of differently exposed pictures of the same subject matter, published in 1995 by Steve Mann and Rosalind Picard.[3] In 1997, this technique of combining several differently exposed images to produce a single HDR image was presented to the computer graphics community by Paul Debevec.
This method was developed to produce a high dynamic range image from a set of photographs taken with a range of exposures. With the rising popularity of digital cameras and easy-to-use desktop software, the term HDR is now popularly used[4] to refer to this process. This composite technique is different from (and may be of lesser or greater quality than) the production of an image from a single exposure of a sensor that has a native high dynamic range. Tone mapping is also used to display HDR images on devices with a low native dynamic range, such as a computer screen.
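As a concrete illustration of this workflow, the following Python sketch uses OpenCV's photo module to merge three bracketed exposures into a radiance map with Debevec's method and then tone-maps the result for display on an ordinary screen. The file names and exposure times are hypothetical, and other merging or tone-mapping operators could be substituted.

    # Minimal sketch: merge bracketed exposures into an HDR radiance map,
    # then tone-map it for display on a low-dynamic-range screen.
    import cv2
    import numpy as np

    # Hypothetical bracketed exposures (file names and times are illustrative).
    files = ["exposure_short.jpg", "exposure_mid.jpg", "exposure_long.jpg"]
    times = np.array([1/500.0, 1/60.0, 1/8.0], dtype=np.float32)  # seconds

    images = [cv2.imread(f) for f in files]

    # Recover the camera response curve and merge the exposures into a
    # 32-bit floating-point radiance map (Debevec's method in OpenCV).
    calibrate = cv2.createCalibrateDebevec()
    response = calibrate.process(images, times)
    merge = cv2.createMergeDebevec()
    hdr = merge.process(images, times, response)

    # Tone-map the radiance map so it can be shown on an ordinary display.
    tonemap = cv2.createTonemapReinhard(gamma=2.2)
    ldr = tonemap.process(hdr)  # values roughly in [0, 1]
    cv2.imwrite("result_tonemapped.jpg",
                np.clip(ldr * 255, 0, 255).astype(np.uint8))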