High dynamic range imaging

Example of an HDR image, shown with the exposures used to create it.
HDR photograph of the Leukbach, a river in Saarburg, Germany.
High-dynamic-range (HDR) image made from three pictures, taken at Tronador, Argentina.

High-dynamic-range imaging (HDRI or HDR) is a set of methods used in imaging and photography to achieve a greater dynamic range between the lightest and darkest areas of an image than standard digital imaging or photographic methods allow. HDR images can represent more accurately the range of intensity levels found in real scenes, from direct sunlight to faint starlight, and are often created by combining multiple, differently exposed pictures of the same subject matter.[1][2][3]

In simpler terms, HDR is a range of methods to provide higher dynamic range from the imaging process. Non-HDR cameras take pictures at one exposure level with a limited contrast range. This results in the loss of detail in bright or dark areas of a picture, depending on whether the camera had a low or high exposure setting. HDR compensates for this loss of detail by taking multiple pictures at different exposure levels and intelligently stitching them together to produce a picture that is representative in both dark and bright areas.

HDR is also commonly used to refer to display of images derived from HDR imaging in a way that exaggerates contrast for artistic effect. The two main sources of HDR images are computer renderings and merging of multiple low-dynamic-range (LDR)[4] or standard-dynamic-range (SDR)[5] photographs. Tone mapping methods, which reduce overall contrast to facilitate display of HDR images on devices with lower dynamic range, can be applied to produce images with preserved or exaggerated local contrast for artistic effect.



In photography, dynamic range is measured in EV differences (known as stops) between the brightest and darkest parts of the image that show detail. An increase of one EV or one stop is a doubling of the amount of light.

Dynamic ranges of common devices
Device                               Dynamic range (stops)   Contrast ratio
LCD display                          9.5                     700:1 (250:1 – 1750:1)
DSLR camera (Canon EOS-1D Mark II)   12[6]                   4096:1
Negative film (Kodak Vision3)        13[7]                   8192:1
Human eye                            10–14[8]                1024:1 – 16384:1
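Since each additional stop doubles the amount of light, the stop and contrast-ratio columns in the table are related by a simple power of two, as this short sketch shows:

```python
import math

# Each additional stop (EV) doubles the amount of light, so a dynamic
# range of n stops corresponds to a contrast ratio of 2**n : 1.

def stops_to_ratio(stops):
    """Contrast ratio for a dynamic range given in EV stops."""
    return 2.0 ** stops

def ratio_to_stops(ratio):
    """Dynamic range in EV stops for a given contrast ratio."""
    return math.log2(ratio)

print(stops_to_ratio(12))    # 4096.0 — the DSLR row above
print(ratio_to_stops(8192))  # 13.0 — the negative-film row above
```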

High-dynamic-range photographs are generally achieved by capturing multiple standard photographs, often using exposure bracketing, and then merging them into an HDR image. Digital photographs are often encoded in a camera's raw image format, because 8-bit JPEG encoding does not offer enough values to allow fine transitions (and introduces undesirable effects due to its lossy compression).

Any camera that allows manual over- or under-exposure of a photo can be used to create HDR images. This includes film cameras, though the images may need to be digitized for processing with software HDR methods.

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II.[9] As the popularity of this imaging method has grown, several camera manufacturers now offer built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and then outputs (only) a tone-mapped JPEG file.[10] The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format.[11] Even some smartphones now include HDR modes, and most platforms have apps that provide HDR picture taking.[12]


Of all imaging tasks, image editing demands the highest dynamic range. Editing operations need high precision to avoid artifacts such as banding and jaggies. Photoshop users are familiar with the limitations of low dynamic range: with 8-bit channels, if you brighten an image, information is lost irretrievably; darkening the image afterwards does not restore the original appearance, and the highlights instead appear flat and washed out. A carefully planned workflow is needed to avoid this problem.
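The irreversibility of brightening in 8-bit channels can be demonstrated with a short NumPy sketch, using a synthetic tonal ramp in place of an image:

```python
import numpy as np

# A full 8-bit tonal ramp standing in for an image.
img = np.arange(256, dtype=np.uint8)

# Brighten by a factor of two; an 8-bit channel must clip at 255.
bright = np.clip(img.astype(np.int32) * 2, 0, 255).astype(np.uint8)

# Darken by the inverse factor.
restored = (bright // 2).astype(np.uint8)

# The lower half of the ramp survives, but every value from 128 up
# was clipped to 255 and now collapses to a single flat tone.
print(np.unique(restored[:128]).size)   # 128 distinct values preserved
print(np.unique(restored[128:]))        # one flat value: the detail is gone
```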

Scanning film

In contrast to digital photographs, color negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.[13]

Dynamic ranges of photographic material
Material             Dynamic range (f-stops)   Object contrast
photographic print   5                         1:32
color negative       8                         1:256
positive slide       12                        1:4096

When digitizing photographic material with an image scanner, the scanner must be able to capture the whole dynamic range of the original, or details are lost. The manufacturer's declarations concerning the dynamic range of flatbed and film scanners are often slightly inaccurate and exaggerated.[citation needed]

Despite color negative film having a smaller dynamic range than slide film, it actually captures considerably more of the scene's dynamic range than slide film does; that range is simply compressed considerably.

Representing HDR images on LDR displays

Camera characteristics

The characteristics of a camera need to be taken into account when reconstructing high dynamic range images. These characteristics are mainly related to gamma curves, sensor resolution, and noise.[14]

Camera calibration

Camera calibration can be divided into three aspects: geometric calibration, photometric calibration and spectral calibration. For HDR reconstruction, the important aspects are photometric and spectral calibrations.[14]

Color reproduction

Because it is human perception of color, rather than color per se, that matters in color reproduction, light sensors and emitters try to render and manipulate a scene's light signal in such a way as to mimic human perception of color. Based on the trichromatic nature of the human eye, the standard solution adopted by industry is to use red, green, and blue (RGB) filters to sample the input light signal and to reproduce it with light-emitting displays. This employs an additive color model, as opposed to the subtractive color model used with printers, paintings, etc.

Photographic color films usually have three layers of emulsion, each with a different spectral curve, sensitive to red, green, and blue light, respectively. The RGB spectral response of the film is characterized by spectral sensitivity and spectral dye density curves.[15]

Contrast reduction

HDR images can easily be represented on common LDR devices, such as computer monitors and photographic prints, by simply reducing the contrast, just as all image editing software is capable of doing.

Clipping and compressing dynamic range

Example of a tone-mapped HDR image: a New York City nighttime cityscape.

Scenes with high dynamic ranges are often represented on LDR devices by clipping the dynamic range, cutting off the darkest and brightest details, or alternatively with an S-shaped conversion curve that compresses contrast progressively and more aggressively in the highlights and shadows while leaving the middle portion of the contrast range relatively unaffected.
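As an illustrative sketch (not a description of any particular product's curve), the difference between hard clipping and an S-shaped curve can be shown in a few lines of NumPy; the tanh curve and its parameters here are arbitrary choices:

```python
import numpy as np

def clip_lum(lum, lo=0.0, hi=1.0):
    """Hard clipping: detail outside [lo, hi] is discarded."""
    return np.clip(lum, lo, hi)

def s_curve(lum, midpoint=0.5, slope=1.0):
    """S-shaped compression: progressively stronger toward the extremes."""
    return 0.5 * (1.0 + np.tanh(slope * (lum - midpoint)))

hdr = np.array([0.01, 0.4, 0.5, 0.6, 2.0, 10.0])
print(clip_lum(hdr))  # the two brightest values both flatten to 1.0
print(s_curve(hdr))   # compressed, but ordering and distinction preserved
```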

Tone mapping

Tone mapping reduces the dynamic range, or contrast ratio, of the entire image while retaining localized contrast between neighboring pixels. Drawing on research into how the human eye and visual cortex perceive a scene, it attempts to represent the whole dynamic range while retaining realistic color and contrast.

Images with too much tone mapping processing have their range over-compressed, creating a surreal low-dynamic-range rendering of a high-dynamic-range scene.
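A minimal global tone-mapping operator in the spirit of Reinhard's photographic operator (the simple L / (1 + L) form, without local adaptation) illustrates the compression:

```python
import numpy as np

def reinhard_global(luminance):
    """Map non-negative scene luminance into [0, 1): L_d = L / (1 + L)."""
    luminance = np.asarray(luminance, dtype=np.float64)
    return luminance / (1.0 + luminance)

scene = np.array([0.01, 0.1, 1.0, 10.0, 1000.0])  # five decades of luminance
print(reinhard_global(scene))  # dark values nearly unchanged, highlights compressed
```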

Comparison with traditional digital images

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors that should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.[16][17][18]

Unlike traditional images, HDR images often do not use fixed integer ranges per color channel, in order to represent many more colors over a much wider dynamic range. Instead of integer values per channel (e.g. 0–255 for red, green and blue in an 8-bit-per-channel image), they typically use a floating-point representation, commonly 16-bit (half precision) or 32-bit floating-point numbers per pixel component. However, when an appropriate transfer function is used, HDR pixels for some applications can be represented with as few as 10–12 bits for luminance and 8 bits for chrominance without introducing visible quantization artifacts.[16][19]
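The Radiance RGBE format mentioned later in this article is one example of such an encoding: three 8-bit mantissas share a single 8-bit exponent. The sketch below follows the general shared-exponent scheme; real implementations may differ in rounding details:

```python
import math

def float_to_rgbe(r, g, b):
    """Encode linear RGB floats as four bytes: three mantissas, one shared exponent."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    mantissa, exponent = math.frexp(v)    # v == mantissa * 2**exponent, 0.5 <= mantissa < 1
    scale = mantissa * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

def rgbe_to_float(r, g, b, e):
    """Decode four RGBE bytes back to linear RGB floats."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - 136)          # 2**(e - 128) / 256
    return (r * f, g * f, b * f)

# A pixel far brighter than any 8-bit integer channel could hold:
encoded = float_to_rgbe(1200.0, 300.0, 7.5)
decoded = rgbe_to_float(*encoded)
print(encoded)   # (150, 37, 0, 139)
print(decoded)   # (1200.0, 296.0, 0.0) — note the quantization of the dim channels
```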

History of HDR photography


The idea of using several exposures to fix a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, the luminosity range being too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.[20]

Mid-twentieth century
External images
Schweitzer at the Lamp, by W. Eugene Smith[21][22]

During the mid-twentieth century, manual tone mapping was done mainly by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This is effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when it is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took five days to produce, in order to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.[22]

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which features dodging and burning prominently, in the context of his Zone System.

With the advent of color photography, tone mapping in the darkroom was no longer possible, due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response over the years, or shot in black and white to use tone mapping methods.

Exposure/Density Characteristics of Wyckoff's Extended Exposure Response Film

Film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force".[23] This XR film had three layers: an upper layer with an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color.[24] The dynamic range of this extended-range film has been estimated as 1:10^8.[25] It has been used to photograph nuclear explosions,[26] for astronomical photography,[27] for spectrographic research,[28] and for medical imaging.[29] Wyckoff's detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.


The desirability of HDR has been recognized for decades, but its wider use was, until quite recently, precluded by the limitations of available computer processing power. In 1985, Gregory Ward created the Radiance RGBE image format, the first HDR imaging file format, and probably the first practical application of HDRI came in the movie industry in the late 1980s.

The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel, led by Prof. Y. Y. Zeevi, who filed for a patent on the concept in 1988.[30] In 1993 the same group introduced the first commercial medical camera to perform real-time capture of multiple images at different exposures and produce an HDR video image.[31]

Modern HDR imaging uses a completely different approach, based on making a high dynamic range luminance or light map using only global image operations (across the entire image), and then tone mapping this result. Global HDR was first introduced in 1993[1] resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.[2]

This method was developed to produce a high dynamic range image from a set of photographs taken with a range of exposures. With the rising popularity of digital cameras and easy-to-use desktop software, the term HDR is now popularly used to refer to this process. This composite method is different from (and may be of lesser or greater quality than) the production of an image from one exposure of a sensor that has a native high dynamic range. Tone mapping is also used to display HDR images on devices with a low native dynamic range, such as a computer screen.


The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory.[32] Mann's method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann's process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.[32]


This method of combining several differently exposed images to produce one HDR image was presented to the public by Paul Debevec.
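Under the simplifying assumption of an already-linearized camera response, the combination of differently exposed images into a radiance map can be sketched as a weighted average; the "hat" weighting below is one common choice, not the exact published algorithm:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Blend linearized exposures (values in [0, 1]) into a radiance map."""
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # "Hat" weight: trust mid-tones, distrust near-black and near-white pixels.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        numerator += w * img / t          # each exposure's radiance estimate
        denominator += w
    return numerator / np.maximum(denominator, 1e-9)

# Three hypothetical brackets of the same scene, one stop apart.
radiance = np.array([0.05, 0.2, 0.8])             # "true" scene radiance
times = [0.25, 0.5, 1.0]                          # relative exposure times
brackets = [np.clip(radiance * t, 0.0, 1.0) for t in times]
print(merge_exposures(brackets, times))           # recovers the radiance values
```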


Photoshop CS2 introduced the Merge to HDR function, 32 bit floating point image support for HDR images, and HDR tone mapping for conversion of HDR images to LDR.[33]


Example of HDR time-lapse video

While custom high-dynamic-range digital video solutions had been developed for industrial manufacturing during the 1980s, it was not until the early 2000s that several scholarly research efforts used consumer-grade sensors and cameras.[34] A few companies such as RED[35] and Arri[36] have been developing digital sensors capable of a higher dynamic range. RED EPIC-X can capture HDRx images with a user selectable 1-3 stops of additional highlight latitude in the 'x' channel. The 'x' channel can be merged with the normal channel in post production software. With the advent of low cost consumer digital cameras, many amateurs began posting tone mapped HDR time-lapse videos on the Internet, essentially a sequence of still photographs in quick succession. In 2010 the independent studio Soviet Montage produced an example of HDR video from disparately exposed video streams using a beam splitter and consumer grade HD video cameras.[37] Similar methods have been described in the academic literature in 2001[38] and 2007.[39]

Modern movies are often filmed with cameras featuring a higher dynamic range, and legacy movies can be upgraded even if manual intervention is needed for some frames (as happened in the past when black-and-white films were upgraded to color). Special effects, especially those in which real and synthetic footage are seamlessly mixed, require both HDR shooting and rendering. HDR video is also needed in applications where capturing temporal changes in the scene demands high accuracy; this is especially important in the monitoring of some industrial processes such as welding, in predictive driver-assistance systems in the automotive industry, and in surveillance systems, to name just a few possible applications. HDR video can also speed up image acquisition in applications where a large number of static HDR images are needed, for example in image-based methods in computer graphics. Finally, with the spread of TV sets featuring enhanced dynamic range, broadcasting HDR video will become important, but may take a long time to occur due to standardization issues. For this particular application, enhancing the current low-dynamic-range (LDR) video signal to HDR in intelligent TV sets seems a more viable near-term solution.[40]


These are examples of four standard dynamic range images that are combined to produce two resulting tone mapped images.

Raw material
Results after processing

These are examples of two standard dynamic range images that are combined to produce a resulting tone mapped image.

Raw material
Result after processing


References
  1. ^ a b "Compositing Multiple Pictures of the Same Scene", by Steve Mann, in IS&T's 46th Annual Conference, Cambridge, Massachusetts, May 9–14, 1993
  2. ^ a b S. Mann, R. W. Picard. "On Being ‘Undigital’ With Digital Cameras: Extending Dynamic Range By Combining Differently Exposed Pictures". http://wearcam.org/is_t95_myversion.pdf.
  3. ^ Reinhard, Erik; Ward, Greg; Pattanaik, Sumanta; Debevec, Paul (2006). High dynamic range imaging: acquisition, display, and image-based lighting. Amsterdam: Elsevier/Morgan Kaufmann. p. 7. ISBN 978-0-12-585263-0. "Images that store a depiction of the scene in a range of intensities commensurate with the scene are what we call HDR, or "radiance maps". On the other hand, we call images suitable for display with current display technology LDR."
  4. ^ Cohen, Jonathan and Tchou, Chris and Hawkins, Tim and Debevec, Paul E. (2001). Steven Jacob Gortler and Karol Myszkowski. ed. "Real-Time High Dynammic Range Texture Mapping". Proceedings of the 12th Eurographics Workshop on Rendering Techniques (Springer): 313–320. ISBN 3-211-83709-4.
  5. ^ Vassilios Vonikakis, Ioannis Andreadis (2008). "Fast automatic compensation of under/over-exposured image regions". In Domingo Mery and Luis Rueda. Advances in image and video technology: Second Pacific Rim Symposium (PSIVT) 2007, Santiago, Chile, December 17–19, 2007. p. 510. ISBN 978-3-540-77128-9. http://books.google.com/?id=vkNfw8SsU3oC&pg=PA510&dq=hdr+sdr+%22standard+dynamic+range%22&q=hdr%20sdr%20%22standard%20dynamic%20range%22.
  6. ^ DxO Labs. "5d Mk iii vs Mk ii". http://www.dxomark.com/index.php/Publications/DxOMark-Reviews/Canon-5D-Mark-III-Review/Sensor-performance.
  7. ^ "Dynamic Range". http://motion.kodak.com/motion/About/The_Storyboard/17788/index.htm.
  8. ^ "Dynamic Range in Digital Photography". http://www.cambridgeincolour.com/tutorials/dynamic-range.htm. Retrieved 2010-12-30.
  9. ^ "Auto Exposure Bracketing by camera model". http://hdr-photography.com/aeb.html. Retrieved 18 August 2009.
  10. ^ "The Pentax K-7: The era of in-camera High Dynamic Range Imaging has arrived!". http://www.adorama.com/alc/blogarticle/11608. Retrieved 18 August 2009.
  11. ^ "Canon PowerShot G12 picks up HD video recording, built-in HDR". http://www.digitaltrends.com/photography/cameras/canon-powershot-g12-picks-up-hd-video-recording-built-in-hdr/?news=123.
  12. ^ "Apple – iPhone 4S – Shoot amazing photos with the 8MP camera.". http://www.apple.com/iphone/built-in-apps/camera.html. Retrieved 18 November 2011.
  13. ^ "Learn about Dynamic Range". photo.net. http://photo.net/learn/drange/. Retrieved 2011-10-19.
  14. ^ a b Asla M. Sá, Paulo Cezar Carvalho, Luiz Velho (2007). High Dynamic Range (First ed.). Focal Press. p. 11. ISBN 978-1-59829-562-7. http://books.google.com/?id=mDsFgWPhWWYC&printsec=frontcover&dq=ISBN:+9781598295627#v=onepage&q=ISBN%3A%209781598295627&f=.
  15. ^ Asla M. Sá, Paulo Cezar Carvalho, Luiz Velho (2007). High Dynamic Range (First ed.). Focal Press. p. 22. ISBN 978-1-59829-562-7. http://books.google.com/?id=mDsFgWPhWWYC&printsec=frontcover&dq=ISBN:+9781598295627#v=onepage&q=ISBN%3A%209781598295627&f=false.
  16. ^ a b Greg Ward, Anyhere Software. "High Dynamic Range Image Encodings". http://www.anyhere.com/gward/hdrenc/hdr_encodings.html.
  17. ^ "The Radiance Picture File Format". http://radsite.lbl.gov/radiance/refer/Notes/picture_format.html. Retrieved 2009-08-21.
  18. ^ Fernando, Randima (2004). "26.5 Linear Pixel Values". GPU Gems. Boston: Addison-Wesley. ISBN 0-321-22832-4. http://http.developer.nvidia.com/GPUGems/gpugems_ch26.html.
  19. ^ Max Planck Institute for Computer Science. "Perception-motivated High Dynamic Range Video Encoding". http://www.mpi-sb.mpg.de/resources/hdrvideo/.
  20. ^ J. Paul Getty Museum. Gustave Le Gray, Photographer. July 9 – September 29, 2002. Retrieved September 14, 2008.
  21. ^ The Future of Digital Imaging – High Dynamic Range Photography, Jon Meyer, Feb 2004
  22. ^ a b 4.209: The Art and Science of Depiction, Frédo Durand and Julie Dorsey, Limitations of the Medium: Compensation and accentuation – The Contrast is Limited, lecture of Monday, April 9. 2001, slide 57–59; image on slide 57, depiction of dodging and burning on slide 58
  23. ^ US 3450536, Wyckoff, Charles W. & EG&G Inc., assignee, "Silver Halide Photographic Film having Increased Exposure-response Characteristics", published March 24, 1961, issued June 17, 1969 
  24. ^ C. W. Wyckoff. Experimental extended exposure response film. Society of Photographic Instrumentation Engineers Newsletter, June–July, 1962, pp. 16-20.
  25. ^ Michael Goesele, et al., "High Dynamic Range Techniques in Graphics: from Acquisition to Display", Eurographics 2005 Tutorial T7
  26. ^ The Militarily Critical Technologies List (1998), pages II-5-100 and II-5-107.
  27. ^ Andrew T. Young and Harold Boeschenstein, Jr., Isotherms in the region of Proclus at a phase angle of 9.8 degrees, Scientific Report No. 5, Harvard, College Observatory: Cambridge, Massachusetts, 1964.
  28. ^ Bryant, R. L.; Troup, G. J.; Turner, R. G. (1965). "The use of a high intensity-range photographic film for recording extended diffraction patterns and for spectrographic work". Journal of Scientific Instruments 42 (2): 116. doi:10.1088/0950-7671/42/2/315.
  29. ^ Eber, Leslie M.; Greenberg, Haervey M.; Cooke, John M.; Gorlin, Richard (1969). "Dynamic Changes in Left Ventricular Free Wall Thickness in the Human Heart". Circulation 39 (4): 455–464. doi:10.1161/01.CIR.39.4.455.
  30. ^ US application 5144442, Ginosar, R., Hilsenrath, O., Zeevi, Y., "Wide dynamic range camera", published 1992-09-01 
  31. ^ Technion – Israel Institute of Technology (1993). Adaptive Sensitivity. http://visl.technion.ac.il/research/isight/AS/.
  32. ^ a b US application 5828793, Steve Mann, "Method and apparatus for producing digital images having extended dynamic ranges", published 1998-10-27 
  33. ^ "Merge to HDR in Photoshop CS2". http://www.luminous-landscape.com/tutorials/hdr.shtml. Retrieved 2009-08-27.
  34. ^ Kang, Sing Bing; Uyttendaele, Matthew; Winder, Simon; Szeliski, Richard (2003). "High dynamic range video". ACM SIGGRAPH 2003 Papers – on SIGGRAPH '03. pp. 319–25. doi:10.1145/1201775.882270. ISBN 1-58113-709-5.
  35. ^ https://www.red.com/epic_scarlet/[dead link]
  36. ^ http://www.arridigital.com/alexa[dead link]
  37. ^ "HDR video accomplished using dual 5D Mark IIs, is exactly what it sounds like". Engadget. http://www.engadget.com/2010/09/09/hdr-video-accomplished-using-dual-5d-mark-iis-is-exactly-what-i/.
  38. ^ "A Real Time High Dynamic Range Light Probe". http://gl.ict.usc.edu/Research/rtlp/.
  39. ^ McGuire, Morgan; Matusik, Wojciech; Pfister, Hanspeter; Chen, Billy; Hughes, John; Nayar, Shree (2007). "Optical Splitting Trees for High-Precision Monocular Imaging". IEEE Computer Graphics and Applications 27 (2): 32–42. doi:10.1109/MCG.2007.45. PMID 17388201.
  40. ^ Karol Myszkowski, Rafal Mantiuk, and Grzegorz Krawczyk (2008). High Dynamic Range Video (First ed.). Morgan & Claypool. p. 20. ISBN 978-1-59829-215-2. http://books.google.com/?id=PVPggnBIC-wC&printsec=frontcover&dq=ISBN:+9781598292145#v=onepage&q=ISBN%3A%209781598292145&f=false.
