Dolby Atmos Fundamentals
The key principles of how Dolby Atmos works
A brief history of Dolby Atmos
In 2012 Dolby introduced a new sound experience for movies. For those who saw Disney Pixar’s animated film Brave in Dolby Atmos, the audio presentation went beyond conventional surround sound by providing a truly immersive sonic experience for the listener, with sound coming from above as well as from the front, sides and back. That movie also marked the beginning of a new creative approach for audio content creators.
Today Dolby Atmos has expanded beyond movies into episodic television, music, podcasts, and gaming. Atmos is supported on a wide range of consumer playback devices, from phones, tablets, smart speakers, computers and TVs, to home theater systems with Atmos soundbars or speaker systems. New Dolby Atmos enabled devices are being introduced all the time and Dolby Atmos mixes are being created around the world.
Before diving into creating in Dolby Atmos, it is necessary to understand several key concepts.
Surround Sound vs Dolby Atmos
Surround sound has existed in various forms for years, starting with quadraphonic sound and moving on to 5.1 and 7.1.
Examples of surround layouts include 5.1 (L, R, C, LFE, Ls, Rs) and 7.1 (L, R, C, LFE, Lss, Rss, Lsr, Rsr). In these formats the first number indicates the number of speaker channels on the horizontal plane surrounding the listener, and the second number indicates the number of LFE channels. When mixing in surround, individual elements are panned to specific speaker locations that correspond directly to specific surround channels.
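The naming convention above can be sketched in a few lines of Python. The channel labels follow the layouts listed in this lesson; the helper function itself is purely illustrative, not part of any Dolby tool.

```python
# Illustrative sketch of the "X.Y" surround naming convention:
# X full-range speaker channels on the horizontal plane, Y LFE channels.
SURROUND_LAYOUTS = {
    "5.1": ["L", "R", "C", "LFE", "Ls", "Rs"],
    "7.1": ["L", "R", "C", "LFE", "Lss", "Rss", "Lsr", "Rsr"],
}

def layout_counts(name):
    """Split an 'X.Y' layout name into (speaker channels, LFE channels)."""
    speakers, lfe = name.split(".")
    return int(speakers), int(lfe)

for name, channels in SURROUND_LAYOUTS.items():
    speakers, lfe = layout_counts(name)
    # The channel list length always matches X + Y.
    assert speakers + lfe == len(channels)
```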
Dolby Atmos changes that approach and lets the mixer move beyond traditional channel-based mixing, allowing them to place sound anywhere in three-dimensional space, both horizontally and vertically. This creates a sound field that envelops the listener and can translate to multiple playback environments.
Bed and Object Audio
As discussed above, traditional surround mixes are created by panning sounds between channels that correspond to speaker locations. Mixing in Dolby Atmos still allows for this time-tested approach in the form of 'bed audio', with the addition of overhead channels (e.g. 7.1.2, where the third number indicates the number of overhead channels).
Bed audio can be configured from stereo (2.0) to 7.1.2, and a Dolby Atmos mix can have multiple beds of differing widths. There are a variety of use cases for Bed audio depending on the creative process of the mixer, the capabilities of the DAW, and the delivery requirements.
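The extended bed notation, where a third number counts overhead channels, can be parsed the same way. A hypothetical sketch (the function name is ours, not an industry API):

```python
# Parse the "X.Y.Z" bed notation: X horizontal speaker channels,
# Y LFE channels, Z overhead channels. Z defaults to 0 when the
# name has only two numbers (e.g. a stereo "2.0" bed).
def bed_channel_counts(name):
    parts = [int(p) for p in name.split(".")]
    horizontal, lfe = parts[0], parts[1]
    overhead = parts[2] if len(parts) > 2 else 0
    return {
        "horizontal": horizontal,
        "lfe": lfe,
        "overhead": overhead,
        "total": horizontal + lfe + overhead,
    }

print(bed_channel_counts("2.0"))    # stereo bed: 2 channels total
print(bed_channel_counts("7.1.2"))  # widest bed: 10 channels total
```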
Dolby Atmos introduces the concept of audio Objects. Objects aren't panned to specific outputs; instead, they are panned in 3D space. Object audio is captured along with associated positional metadata, which includes 3D (X, Y, Z) positional coordinates and Object size, which expands the sound field of an Object. This size and position information is captured as changes are made in real time and is referred to as Object audio metadata (OAMD). Upon final content delivery, the audio is captured along with the positional metadata for playback on consumer devices.
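The positional metadata described above can be pictured as a small data structure: coordinates plus a size value, snapshotted whenever the mixer moves an Object. The field names and coordinate conventions below are assumptions for illustration, not Dolby's actual OAMD wire format.

```python
from dataclasses import dataclass

# Hypothetical sketch of per-Object positional metadata. The coordinate
# ranges are assumed conventions: x/y span -1.0 to 1.0 across the room,
# z runs from the floor plane (0.0) to overhead (1.0).
@dataclass
class ObjectMetadata:
    x: float      # left (-1.0) to right (+1.0)
    y: float      # back (-1.0) to front (+1.0)
    z: float      # listener plane (0.0) to overhead (1.0)
    size: float   # 0.0 = point source; larger values widen the sound field

    def clamped(self):
        """Return a copy with every value kept within the assumed ranges."""
        clip = lambda v, lo, hi: max(lo, min(hi, v))
        return ObjectMetadata(
            clip(self.x, -1.0, 1.0),
            clip(self.y, -1.0, 1.0),
            clip(self.z, 0.0, 1.0),
            clip(self.size, 0.0, 1.0),
        )

# One such snapshot is captured per change as the mixer pans in real time.
snapshot = ObjectMetadata(x=1.4, y=0.0, z=0.8, size=0.2).clamped()
print(snapshot)  # x is clamped back into range
```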
A mixer can choose to create Dolby Atmos content with Beds, Objects, or both, but Objects allow for the most creative expression when creating immersive content.
LFE and Bass Management
The topics of the LFE channel and Bass Management apply to both traditional surround and Dolby Atmos, and are important to understand for those without experience mixing in surround.
The LFE channel has its roots in cinema, where a separate track was required to carry low-frequency audio without overloading full-range speakers. As the name implies, it is used for low-frequency content and is reproduced by a subwoofer. However, the LFE channel is not the only audio reproduced by the subwoofer: bass management, which is used in Dolby Atmos, applies a crossover frequency that redirects low-frequency content from any channel to the subwoofer.
In Dolby Atmos the LFE channel is addressed using Bed Audio only (Object audio is full range) and is discarded when Dolby Atmos content is played back in stereo or when deriving a stereo re-render. In general, the LFE channel should be used sparingly and only for emphasis of very low frequency audio that could cause overloads when combined with other signals. To ensure low frequency content will be heard in stereo, it is best to leave the low frequency content in other Bed channels or Objects.
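The routing logic of bass management can be summarized in a short sketch (this is a conceptual routing decision, not real DSP, and the 80 Hz crossover value is an assumption; actual crossover frequencies depend on the monitoring system):

```python
# Conceptual bass-management routing: the dedicated LFE channel always
# goes to the subwoofer, and content below the crossover frequency on
# any other channel is redirected there as well.
CROSSOVER_HZ = 80.0  # assumed crossover; real systems vary

def route(channel, freq_hz):
    """Decide which driver reproduces a tone of freq_hz on a channel."""
    if channel == "LFE":
        return "subwoofer"      # LFE content is always sent to the sub
    if freq_hz < CROSSOVER_HZ:
        return "subwoofer"      # bass-managed redirect from a full-range channel
    return channel              # everything else stays on its own speaker

print(route("LFE", 40.0))   # subwoofer
print(route("L", 50.0))     # subwoofer (redirected)
print(route("L", 500.0))    # L
```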
Rendering
Dolby Atmos is an adaptive format, meaning that it can be played back on a wide variety of Dolby Atmos enabled devices. The process of reproducing the full Dolby Atmos mix on the playback device is done by an Object Audio Renderer (OAR) and this is referred to as Rendering. Rendering takes place during both Dolby Atmos content creation and consumer playback. For mixers, the sound is rendered to their monitoring environment. For listeners, the sound will be rendered to the specific playback device and available speakers to reproduce the most accurate immersive audio experience true to the mixer’s creative intent.
Binaural Rendering
When creating Dolby Atmos content, the mixer typically monitors over loudspeakers. However, many consumers also listen to Dolby Atmos with headphones or even exclusively with headphones.
Dolby Atmos is an amazing experience over headphones. When creating Dolby Atmos content, mixers can monitor over headphones using the Binaural Renderer. The Binaural Renderer renders all the Bed and Object audio to create a compelling immersive mix over headphones using a head-related transfer function (HRTF). This replicates the experience of listening to immersive audio on speakers as closely as possible while wearing headphones. Dolby adds distance modeling metadata, which can be applied to Bed and Object audio to make the binaural experience even more immersive.
To experience the difference between stereo and binaural Dolby Atmos, click the link below to access the Dolby Atmos Music visualizer which allows you to listen to music and switch between stereo and Dolby Atmos.
https://www.dolby.com/atmos-visualizer-music/
In general, a mix created while monitoring via loudspeakers will reliably translate to the binaural renderer. A mix created on headphones via the binaural renderer can also translate well to loudspeaker playback; however, it is best practice to always verify a mix over loudspeakers. For the purposes of this training course, you can monitor using loudspeakers or over headphones using the Binaural Renderer.
Re-Renders
Re-Renders are channel-based derivatives of the Atmos master that are generated by the Renderer. Dolby Atmos workflows make it easy to derive a stereo or surround mix for delivery to traditional stereo platforms or streaming services where Atmos is not supported.
Depending on the Atmos content creation tools used, re-renders can be binaural, stereo, or surround formats from 5.1 all the way up to 9.1.6. In addition to full mixes, re-renders can be made from stems and specific groups of Beds and Objects. This will be covered in detail in a later lesson.
Atmos First
Creating in Dolby Atmos is simultaneously future-proof and backwards compatible. Dolby Atmos mixes sound amazing on a wide range of Dolby Atmos enabled playback devices. Dolby Atmos mixes also sound great when played back on non-Dolby Atmos enabled playback devices in 5.1 or regular stereo, as the format can adapt backwards to older devices and audio systems.
The best results across formats are achieved by working in Dolby Atmos from the beginning and deriving stereo or other surround formats from the Dolby Atmos master, rather than working in stereo or 5.1 first and embellishing the mix later to add immersive elements.
Loudness in Dolby Atmos
With traditional channel-based mixing, average level and peak requirements have historically been determined by the medium or by industry standards. For stereo music it has been typical in recent years to aim for a mix that is as loud as possible without clipping. Mixers often employ master bus compression and limiting to achieve loudness targets.
Working in Dolby Atmos is different. Dolby Atmos mixes need more headroom and Dolby Atmos delivery employs Dolby codecs that use metadata to ensure proper playback level.
The loudness target specified by streaming services allows for your Dolby Atmos mix to play over speakers, headphones, and in other environments without excessive dynamics processing.
When mixing in Dolby Atmos, traditional master bus dynamics processing is not available. Mixing to loudness targets may require attenuation control with a combination of linked compressors/limiters, groups, and/or VCAs. This will be covered later in this course.
For mastering Dolby Atmos music, the Dolby Atmos Assembler provides a way to apply post-processing to existing Dolby Atmos master files.
Dolby Atmos Content Creation Tools
There are a range of Dolby Atmos content creation workflows utilizing various standalone DAWs and/or DAWs in conjunction with the Dolby Atmos Renderer application.
Dolby Atmos content creation tools can be divided into different categories and functions. These tools can be combined to provide an optimal workflow that can be tailored to the specific needs and budgets of studios and mixers.
The tools and functions required for creating Dolby Atmos content are:
– DAW software to create (record, edit, mix) immersive content. The DAW software may have the ability to create Dolby Atmos object metadata with a native immersive panner, or a Dolby Atmos panning plug-in may be used.
– A Dolby Atmos Renderer for rendering the audio and metadata to the playback system and/or to headphones. This capability is integrated into licensed DAW software or provided via a connected Dolby Atmos Renderer.
– Software that can record or export a finished Dolby Atmos master file for encoding and distribution. This capability is integrated into licensed DAW software or provided via a connected Dolby Atmos Renderer.
Synchronization
The Dolby Atmos Renderer application uses SMPTE timecode to synchronize audio and metadata coming from the DAW. Post-production mixers are generally already familiar with Longitudinal Timecode (LTC) and working with various frame rates, while mixers coming to Dolby Atmos from music, podcasts, and gaming may be less familiar with them. For non-post-production applications, 24 fps (frames per second) is used as the standard. An LTC Generator plug-in is supplied with the Dolby Atmos Renderer application, and the DAW/plug-in and the Dolby Atmos Renderer application need to be set to the same frame rate. For DAWs with the ability to export a Dolby Atmos master file, the session/project should be set to 24 fps, but there is no need to generate timecode for the Renderer to chase.
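The frame arithmetic behind LTC at the 24 fps standard mentioned above is simple to sketch. The helper names are ours, not part of any Dolby tool; the hour-1 session start is just a common post-production convention used as an example.

```python
# Convert HH:MM:SS:FF timecode to an absolute frame count and back,
# at the 24 fps standard used for non-post-production Atmos work.
FPS = 24

def tc_to_frames(tc, fps=FPS):
    h, m, s, f = (int(part) for part in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(frames, fps=FPS):
    f = frames % fps
    total_s = frames // fps
    return f"{total_s // 3600:02d}:{total_s % 3600 // 60:02d}:{total_s % 60:02d}:{f:02d}"

start = tc_to_frames("01:00:00:00")   # sessions often start at hour 1
print(start)                          # 86400 frames (3600 s * 24 fps)
print(frames_to_tc(start + 23))       # 01:00:00:23
```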
Speaker Placement and Calibration
The topics of Speaker Placement and Calibration are not unique to Dolby Atmos, but take on crucial importance when mixing in speaker layouts beyond stereo. Proper speaker placement, calibration and room tuning are key to an Atmos mix that translates well to consumer playback environments and between Atmos mix facilities. These topics are covered further in Appendix B.
Dolby Atmos Signal Flow
The diagram below provides a quick visual reference to signal flow using the Dolby Atmos Renderer application running internally while monitoring on headphones. Smaller session work and training can be accomplished using a laptop and headphones; however, for larger sessions and finishing work on mixes, speakers should be used. These systems will be outlined later in the course.