From Wikipedia, the free encyclopedia
Lip sync, lip-sync, or lip-synch (short for lip synchronization) is a technical term for matching lip movements with sung or spoken vocals. The term can refer to any of a number of techniques and processes used in live performances and recordings.
In the case of live concert performances, lip-synching is a commonly used shortcut, but it can be considered controversial. In film production, lip synching is often part of the post-production phase. Dubbing foreign-language films and making animated characters appear to speak both require elaborate lip-synching. Strategy video games make extensive use of lip-synced sound files to create an immersive environment.
Though lip-synching, also called miming,[according to whom?] can be used to make it appear as though actors have musical ability (e.g., The Partridge Family) or to misattribute vocals (e.g., Milli Vanilli), it is more often used by recording artists to create a particular effect, to enable them to perform live dance numbers, or to cover for illness or other problems during a live performance. It is also commonly used in drag shows. Television programs sometimes require guest celebrities to lip-sync in order to shorten their appearances, since it reduces rehearsal time and greatly simplifies sound mixing. Some artists, however, lip-sync because they are not confident singing live, and lip-synching eliminates the risk of hitting wrong notes. The practice of lip-synching during live performances is frowned upon by many who view it as a crutch used only by lesser talents.[according to whom?]
Because the film track and music track are recorded separately during the creation of a music video, artists usually lip-sync to their songs and often imitate playing musical instruments as well. Artists sometimes also move their lips faster than the track to create a slow-motion effect in the final clip, which is widely considered difficult to achieve. Similarly, some artists have been known to lip-sync backwards for music videos so that, when the footage is reversed, the singer appears to sing forwards while time moves backwards in his or her surroundings.
Michael Jackson's performance on the television special Motown 25: Yesterday, Today, Forever (1983) changed the scope of live stage shows. Ian Inglis, author of Performance and Popular Music: History, Place and Time (2006), notes that the fact that "Jackson lip-synced 'Billie Jean' is, in itself, not extraordinary, but the fact that it did not change the impact of the performance is extraordinary; whether the performance was live or lip-synced made no difference to the audience." In 1989, a New York Times article on "Bananarama's recent concert at the Palladium" observed that the "first song had a big beat, layered vocal harmonies and a dance move for every line of lyrics", but that "the drum kit was untouched until five songs into the set" and "the backup vocals (and, it seemed, some of the lead vocals as well - a hybrid lead performance) were on tape along with the beat". The article also claims that the "British band Depeche Mode, ...adds vocals and a few keyboard lines to taped backup onstage", although this practice is commonplace in electronic music.
Milli Vanilli became one of the most popular pop acts of the late 1980s and early 1990s. The group's debut album Girl You Know It's True achieved international success and earned them a Grammy Award for Best New Artist on February 21, 1990. Their success turned to infamy and failure when the Grammy award was withdrawn after Los Angeles Times writer Chuck Philips revealed that the lead vocals on the record were not the voices of credited members Rob Pilatus and Fab Morvan.
Ariana Grande is known to have lip-synced various parts of her first tour to avoid straining her soprano voice, and it was later discovered that she lip-synced many performances of her single "Problem", including her Radio Disney performance.
Chris Nelson of The New York Times reported that by the 1990s, "[a]rtists like Madonna and Janet Jackson set new standards for showmanship, with concerts that included not only elaborate costumes and precision-timed pyrotechnics but also highly athletic dancing. These effects came at the expense of live singing." Edna Gundersen of USA Today reported: "The most obvious example is Madonna's Blond Ambition World Tour, a visually preoccupied and heavily choreographed spectacle. Madonna lip-syncs the duet "Now I'm Following You", while a Dick Tracy character mouths Warren Beatty's recorded vocals. On other songs, background singers plump up her voice, strained by the exertion of non-stop dancing."
Similarly, in reviewing Janet Jackson's Rhythm Nation World Tour, Michael MacCambridge of the Austin American-Statesman commented "[i]t seemed unlikely that anyone—even a prized member of the First Family of Soul Music—could dance like she did for 90 minutes and still provide the sort of powerful vocals that the '90s super concerts are expected to achieve."
The music video for Electrasy's 1998 single "Morning Afterglow" featured lead singer Alisdair McKinnell lip-syncing the entire song backwards. This allowed the video to create the effect of an apartment being tidied by 'un-knocking over' bookcases, while the music plays forwards.
In 2004, US pop singer Ashlee Simpson appeared on the live comedy TV show Saturday Night Live, and during her performance "she was revealed to apparently be lip-synching". According to "her manager-father[,]...his daughter needed the help because acid reflux disease had made her voice hoarse." Her manager stated: "Just like any artist in America, she has a backing track that she pushes so you don’t have to hear her croak through a song on national television." During the incident, vocal parts from a previously performed song began to play while the singer was "holding her microphone at her waist"; she made "some exaggerated hopping dance moves, then walked off the stage".
During the 2008 Beijing Olympics, CTV news reported that a "nine-year-old Chinese girl's stunning performance at the Beijing Olympics opening ceremony has been marred by revelations she was lip-synching". The article states that "Lin Miaoke was lip-synching Friday to a version of "Ode to the Motherland" sung by seven-year-old Yang Peiyi, who was deemed not pretty enough to perform as China's representative".
During Super Bowl XLIII, "Jennifer Hudson's flawless performance of the national anthem" was "lip-synched ...to a previously recorded track", as apparently was the performance by Faith Hill, who preceded her. The singers lip-synched "...at the request of Rickey Minor, the pregame show producer", who argued that "There's too many variables to go live." Subsequent Super Bowl national anthems were performed live.
Teenage viral video star Keenan Cahill lip-syncs popular songs on his YouTube channel. His popularity increased as he included guests such as rapper 50 Cent in November 2010 and David Guetta in January 2011, making his one of the most popular channels on YouTube by January 2011.
In 1981, Wm. Randy Wood started lip sync contests at the Underground Nightclub in Seattle, Washington, to attract customers. The contests were so popular that he took them nationwide, and by 1984 he had contests running in over 20 cities. They were so successful that Wood went to work for Dick Clark Productions as a consulting producer for the TV series Puttin' on the Hits. The show received an impressive 9.0 rating in its first season and was nominated twice for Daytime Emmy Awards. In the United States, the hobby reached its peak during the 1980s, when several game shows, such as Puttin' on the Hits and Lip Service, were created. The Family Channel had a Saturday-morning show called Great Pretenders in which kids lip-synched their favorite songs.
In film production, lip synching is often part of the post-production phase. Most films today contain scenes in which the dialogue has been re-recorded afterwards; lip-synching is the technique used when animated characters speak; and lip synching is essential when films are dubbed into other languages. In many musical films, actors sang their own songs beforehand in a recording session and lip-synched during filming, but many also lip-synched to voices other than their own. Marni Nixon sang for Deborah Kerr in The King and I, Annette Warren for Ava Gardner in Show Boat, Robert McFerrin for Sidney Poitier in Porgy and Bess, Betty Wand for Leslie Caron in Gigi, Lisa Kirk for Rosalind Russell in Gypsy, and Bill Lee for Christopher Plummer in The Sound of Music. In the 1952 MGM classic Singin' in the Rain, lip synching is a major plot point.
Automated dialogue replacement, also known as "ADR" or "looping," is a film sound technique involving the re-recording of dialogue after filming. Sometimes the dialogue recorded on location is unsatisfactory, either because it has too much background noise or because the director is unhappy with the performance, so the actors replace their own voices in a "looping" session after the filming.
Another manifestation of lip synching is the art of making an animated character appear to speak in a prerecorded track of dialogue. The technique involves determining the timings of the speech (the breakdown) as well as animating the lips and mouth to match the dialogue track. The earliest examples of lip-sync in animation were attempted by Max Fleischer in his 1926 short My Old Kentucky Home. The technique continues to this day, with animated films and television shows such as Shrek, Lilo & Stitch, and The Simpsons using lip-synching to make their artificial characters talk. Lip synching is also used in comedies such as This Hour Has 22 Minutes and in political satire, altering the original wording partially or entirely. It has been used in conjunction with translating films from one language to another, for example Spirited Away. Lip-synching can be a difficult issue in translating foreign works for a domestic release, as a direct translation of the lines often leaves the dialogue overrunning or underrunning the mouth movements.
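The breakdown step described above can be sketched as a lookup from timed phonemes to mouth shapes (visemes), sampled at the animation frame rate. The phoneme symbols, viseme names, and timings below are illustrative assumptions, not any studio's actual pipeline.

```python
# Sketch of an animation lip-sync "breakdown": map timed phonemes from a
# dialogue track to mouth shapes (visemes) keyed to animation frames.
# The phoneme-to-viseme table here is invented for illustration.

PHONEME_TO_VISEME = {
    "AA": "open",    # as in "father" - wide open mouth
    "IY": "wide",    # as in "see" - stretched lips
    "UW": "round",   # as in "you" - rounded lips
    "M":  "closed",  # bilabial - lips pressed together
    "F":  "teeth",   # labiodental - teeth on lower lip
}

def breakdown(timed_phonemes, fps=24):
    """Convert (start_seconds, phoneme) pairs into per-frame viseme keys."""
    keys = []
    for start, phoneme in timed_phonemes:
        frame = round(start * fps)
        viseme = PHONEME_TO_VISEME.get(phoneme, "neutral")
        keys.append((frame, viseme))
    return keys

# The word "ma" ~ M + AA, spoken starting at t = 0.5 s
track = [(0.50, "M"), (0.55, "AA")]
print(breakdown(track))  # [(12, 'closed'), (13, 'open')]
```

An animator (or software) would then interpolate mouth drawings or blend shapes between these keys.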
Quality film dubbing requires that the dialogue first be translated in such a way that the words can match the actors' lip movements. This is often hard to achieve if the translation is to stay true to the original dialogue. Elaborate lip-synched dubbing is also a lengthy and expensive process. The simplified, non-phonetic representation of mouth movement in much anime eases this process.
In English-speaking countries, many foreign TV series (especially anime like Pokémon) are dubbed for television broadcast. However, cinematic releases of films tend to come with subtitles instead. The same is true of countries in which the local language is not spoken widely enough to make the expensive dubbing commercially viable (in other words, there is not enough market for it).
However, most non-English-speaking countries with a large enough population dub all foreign films into their national language for cinematic release. In such countries, people are accustomed to dubbed films, so less-than-optimal matches between lip movements and voice generally go unnoticed. Dubbing is preferred by some because it allows the viewer to focus on the on-screen action without reading subtitles.
Early video games did not use any voice sounds, due to technical limitations. In the 1970s and early 1980s, most video games used simple electronic sounds such as bleeps and simulated explosions. At most, these games featured some generic jaw or mouth movement to convey a communication process in addition to text. However, as games became more advanced in the 1990s and 2000s, lip sync and voice acting became a major focus of many games.
Lip sync was for some time a minor focus in role-playing video games. Because of the amount of information conveyed, the majority of communication in these games uses scrolling text. Older RPGs rely solely on text, using inanimate portraits to indicate who is speaking. Some games, such as Grandia II or Diablo, use voice acting, but due to simple character models there is no mouth movement to simulate speech. RPGs for hand-held systems are still largely text-based, with the rare use of lip sync and voice files reserved for full-motion-video cutscenes. Newer RPGs have extensive audio dialogue. The Neverwinter Nights series is an example of a transitional game in which important dialogue and cutscenes are fully voiced but less important information is still conveyed in text. In games such as Jade Empire and Knights of the Old Republic, developers created partial artificial languages to give the impression of full voice acting without having to voice all dialogue.
Unlike RPGs, strategy video games make extensive use of sound files to create an immersive battle environment. Most games simply play a recorded audio track on cue, with some providing inanimate portraits to accompany the voice. StarCraft used full-motion-video character portraits with several generic speaking animations that did not synchronize with the lines spoken in the game. The game did, however, make extensive use of recorded speech to convey the plot, with the speaking animations providing a good idea of the flow of the conversation. Warcraft III used fully rendered 3D models to animate speech with generic mouth movements, both as character portraits and as in-game units. Like the FMV portraits, the 3D models did not synchronize with the actual spoken text, while in-game models tended to simulate speech by moving their heads and arms rather than using actual lip synchronization. Similarly, the game Codename Panzers uses camera angles and hand movements to simulate speech, as the characters have no actual mouth movement. However, StarCraft II used fully synced unit portraits and cinematic sequences.
The first-person shooter (FPS) is a genre that generally places much more emphasis on graphical display, mainly because the camera is almost always very close to character models. Because increasingly detailed character models require animation, FPS developers devote considerable resources to creating realistic lip synchronization for the many lines of speech in most FPS games. Early 3D models used basic up-and-down jaw movements to simulate speech. As technology progressed, mouth movements began to closely resemble real human speech movements. Medal of Honor: Frontline dedicated a development team to lip sync alone, producing the most accurate lip synchronization in games at that time. Since then, games such as Medal of Honor: Pacific Assault and Half-Life 2 have used code that dynamically generates mouth movements from the spoken audio, resulting in remarkably lifelike characters. Gamers who create their own videos using character models with no lip movements, such as the helmeted Master Chief from Halo, improvise by moving the characters' arms and bodies and bobbing their heads (see Red vs. Blue).
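The simplest audio-driven approach can be illustrated by deriving a jaw-opening value from the short-term loudness of the dialogue audio. This is a toy sketch of the general idea only; real engines such as Half-Life 2's analyse phonetic content, not just amplitude, and all constants below are illustrative assumptions.

```python
# Toy sketch: derive a per-video-frame jaw opening (0.0 closed .. 1.0 open)
# from the RMS amplitude of dialogue audio. Loudness alone is a crude proxy
# for speech, but it shows how audio can drive mouth animation dynamically.
import math

def jaw_openings(samples, sample_rate=44100, fps=30, max_rms=0.3):
    """Return one jaw-opening value per video frame."""
    window = sample_rate // fps          # audio samples per video frame
    openings = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        openings.append(min(rms / max_rms, 1.0))  # clamp to [0, 1]
    return openings

# Half a second of a 220 Hz tone followed by half a second of silence
tone = [0.25 * math.sin(2 * math.pi * 220 * t / 44100) for t in range(22050)]
silence = [0.0] * 22050
values = jaw_openings(tone + silence)
# The mouth opens during the tone and closes during the silence
```

A real implementation would smooth the values between frames and blend them with phoneme-shaped visemes rather than a single jaw axis.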
A lip synchronization problem, also known as a lip-sync error, can arise when television video and audio signals are transported via different facilities (e.g., a geosynchronous satellite radio link and a landline) that have significantly different delay times. In such cases it is necessary to delay the earlier of the two signals electronically.
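In principle, the electronic correction is just a delay buffer applied to whichever signal arrives first. The sketch below assumes discrete sample streams and a known, fixed delay difference; real broadcast equipment measures the offset and applies the delay in dedicated hardware.

```python
# Sketch of lip-sync error correction: delay the earlier-arriving signal
# so that it lines up with the later one. Assumes the delay difference is
# already known and expressed in samples.
from collections import deque

def delay_signal(samples, delay_samples):
    """Delay a signal by delay_samples, padding the start with silence.

    E.g., if the audio arrives earlier than the video because the video
    took a satellite hop, the audio is delayed by the same amount so the
    two signals line up again at the viewer's end.
    """
    buffer = deque([0.0] * delay_samples)  # silence fills the initial gap
    out = []
    for s in samples:
        buffer.append(s)
        out.append(buffer.popleft())
    return out

# Audio arrives 3 samples early relative to video: delay it by 3.
audio = [0.1, 0.2, 0.3, 0.4, 0.5]
print(delay_signal(audio, 3))  # [0.0, 0.0, 0.0, 0.1, 0.2]
```

In a live chain the same buffering runs continuously rather than over a finite list.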
Lip-sync issues have become a serious problem for the television industry worldwide. They are not only annoying but can lead to subconscious viewer stress, which in turn leads to viewer dislike of the television program being watched. Television industry standards organizations have become involved in setting standards for lip-sync errors.
The miming of the playing of a musical instrument is equivalent of lip-synching.[according to whom?] A notable example of miming includes John Williams' piece at President Obama’s inauguration, which was a recording made two days earlier and mimed by musicians Yo-Yo Ma, Itzhak Perlman. The musicians wore earpieces to hear the playback.