From Wikipedia, the free encyclopedia
The origin of language in the human species has been the topic of scholarly discussion for several centuries. In spite of this, there is no consensus on its ultimate origin or age. One problem that makes the topic difficult to study is the lack of direct evidence. Consequently, scholars wishing to study the origins of language must draw inferences from other kinds of evidence, such as the fossil record, archaeological evidence, contemporary language diversity, studies of language acquisition, and comparisons between human language and systems of communication existing among other animals, particularly other primates. It is generally agreed that the origins of language are closely tied to the origins of modern human behavior, but there is little agreement about the implications and directionality of this connection.
This shortage of empirical evidence has led many scholars to regard the entire topic as unsuitable for serious study. In 1866, the Linguistic Society of Paris went so far as to ban debates on the subject, a prohibition which remained influential across much of the western world until late in the twentieth century. Today, there are numerous hypotheses about how, why, when, and where language might first have emerged. It might seem that there is hardly more agreement today than there was a hundred years ago, when Charles Darwin's theory of evolution by natural selection provoked a rash of armchair speculations on the topic. Since the early 1990s, however, a growing number of professional linguists, archaeologists, psychologists, anthropologists, and others have attempted to address with new methods what they are beginning to consider "the hardest problem in science".
Approaches to the origin of language can be divided according to their underlying assumptions. "Continuity theories" are based on the idea that language is so complex that one cannot imagine it simply appearing from nothing in its final form: it must have evolved from earlier pre-linguistic systems among our primate ancestors. "Discontinuity theories" are based on the opposite idea — that language is a unique trait so it cannot be compared to anything found among non-humans and must therefore have appeared fairly suddenly during the course of human evolution. Another contrast is between theories that see language mostly as an innate faculty that is largely genetically encoded, and those that see it as a system that is mainly cultural — that is, learned through social interaction.
Noam Chomsky is a prominent proponent of discontinuity theory. His views on the nature of UG (innate universal grammar) "have long been dominant within the field of linguistics, but they themselves have undergone marked changes from decade to decade" (Christiansen, 59). He argues that a single chance mutation occurred in one individual on the order of 100,000 years ago, triggering the "instantaneous" emergence of the language faculty (a component of the mind-brain) in "perfect" or "near-perfect" form. The philosophical argument runs, briefly, as follows: firstly, from what is known about evolution, any biological change in a species arises by a random genetic change in a single individual which then spreads throughout its breeding group. Secondly, from a computational perspective on the theory of language, the only change needed was the cognitive ability to construct and process recursive data structures in the mind (the property of "discrete infinity", which appears to be unique to the human mind). This genetic change, which endowed the human mind with discrete infinity, Chomsky argues, essentially amounts to a jump from being able to count up to some fixed number N to being able to count indefinitely (i.e. if N can be constructed then so can N+1). It follows from these assertions that the evolution of the human language faculty was saltational since, as a matter of logical fact, there is no gradual transition from a mind capable only of counting up to a fixed number to a mind capable of counting indefinitely. The picture, by loose analogy, is that the formation of the language faculty in humans is akin to the formation of a crystal: discrete infinity was the seed crystal in a super-saturated primate brain, on the verge of blossoming into the human mind by physical law, once a single small but crucial keystone was added by evolution.
It thus follows from this theory that language appeared rather suddenly within the history of human evolution. On this account, the language faculty did not develop gradually out of earlier primate communication; rather, a single genetic change acted on a primate brain already primed for it, crystallising the faculty at once.
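Chomsky's "discrete infinity" argument can be sketched computationally: a bounded system enumerates only up to a fixed ceiling, whereas a single recursive rule ("if N, then N+1") generates an unbounded set, and the same kind of rule builds unboundedly nested structures. The following Python sketch is purely illustrative; the function names are invented for this example and appear in no source.

```python
# A bounded system: can only "count" up to a fixed ceiling wired in advance.
def bounded_count(n, limit=5):
    return list(range(min(n, limit)))

# Discrete infinity: one recursive rule ("if N, then N+1") generates an
# unbounded set -- no fixed ceiling exists anywhere in the system.
def successor(n):
    return n + 1

# The same recursive capacity builds unboundedly nested structures,
# loosely analogous to embedded clauses ("she said that he said that ...").
def embed(phrase, depth):
    if depth == 0:
        return phrase
    return ["that", embed(phrase, depth - 1)]
```

The point of the contrast is that no sequence of small adjustments to `bounded_count` turns it into `successor`: either a system has a self-applicable rule or it does not, which is the sense in which the transition is claimed to be saltational rather than gradual.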
Continuity-based theories are currently held by a majority of scholars, but they vary in how they envision this development. Among those who see language as being mostly innate, some — notably Steven Pinker — avoid speculating about specific precursors in nonhuman primates, stressing simply that the language faculty must have evolved in the usual gradualistic way. Others in this intellectual camp — notably Ib Ulbæk — hold that language evolved not from primate communication but from primate cognition, which is significantly more complex. Those who see language as a socially learned tool of communication, such as Michael Tomasello, see it developing from the cognitively controlled aspects of primate communication, these being mostly gestural as opposed to vocal. Where vocal precursors are concerned, many continuity theorists envisage language evolving from early human capacities for song.
Transcending the continuity-versus-discontinuity divide are those who view the emergence of language as the consequence of some kind of social transformation that, by generating unprecedented levels of public trust, liberated a genetic potential for linguistic creativity that had previously lain dormant. 'Ritual/speech coevolution theory' is an example of this approach. Scholars in this intellectual camp point to the fact that even chimpanzees and bonobos have latent symbolic capacities that, in the wild, they rarely if ever use. The argument goes that if a mutation abruptly enabled language capabilities in an individual, it would at that stage be maladaptive for that individual, because language would serve only to give away information without returning any fitness benefit. Therefore, a very specific social structure (such as very high genetic relatedness) must have evolved before or concurrently with language to make its development evolutionarily adaptive for individuals.
Because the emergence of language is located so far back in human prehistory, the relevant developments have left no direct historical traces; nor can comparable processes be observed today. Despite this, the emergence of new sign languages in modern times — Nicaraguan Sign Language, for example — might potentially offer insights into the developmental stages and creative processes necessarily involved. Another approach has been to inspect early human fossils, looking for traces of physical adaptation to language use. In some cases, when the DNA of extinct humans can be recovered, the presence or absence of supposedly language-relevant genes — FOXP2 is an example — might prove informative. Another approach, this time archaeological, is to invoke symbolic behaviour (such as repeated ritual activity) that may leave an archaeological trace—such as mining and modification of ochre pigments for body-painting—while developing theoretical arguments to justify inferences from symbolism in general to language in particular.
The time range for the evolution of language and/or its anatomical prerequisites extends, at least in principle, from the phylogenetic divergence of Homo (2.3 to 2.4 million years ago) from Pan (5 to 6 million years ago) to the emergence of full behavioral modernity some 150,000–50,000 years ago. Few dispute that Australopithecus probably lacked vocal communication significantly more sophisticated than that of great apes in general, but scholarly opinions vary as to the developments since the appearance of Homo some 2.5 million years ago. Some scholars assume the development of primitive language-like systems (proto-language) as early as Homo habilis, while others place the development of symbolic communication only with Homo erectus (1.8 million years ago) or Homo heidelbergensis (0.6 million years ago) and the development of language proper with Homo sapiens less than 200,000 years ago.
Using statistical methods to estimate the time required to achieve the current spread and diversity of modern languages, Johanna Nichols — a linguist at the University of California, Berkeley — argued in 1998 that vocal languages must have begun diversifying in our species at least 100,000 years ago. Using phonemic diversity, a more recent analysis offers directly linguistic support for a similar date. Estimates of this kind are independently supported by genetic, archaeological, palaeontological and much other evidence suggesting that language probably emerged somewhere in sub-Saharan Africa during the Middle Stone Age, roughly contemporaneous with the speciation of Homo sapiens.
Linguists now agree that, apart from such things as pidgins, there are no "primitive" languages: all modern human populations speak languages of comparable expressive power, though much recent scholarship has explored how linguistic complexity varies between and within languages over historical time. This consensus was still being contested into the early 21st century (Everett 2005), and the view that no modern language is primitive represents the latest major change in linguistic approaches to language.
I cannot doubt that language owes its origin to the imitation and modification, aided by signs and gestures, of various natural sounds, the voices of other animals, and man’s own instinctive cries.
— Charles Darwin, 1871. The Descent of Man, and Selection in Relation to Sex.
In 1861, historical linguist Max Müller published a list of speculative theories concerning the origins of spoken language.
Most scholars today consider all such theories not so much wrong—they occasionally offer peripheral insights—as comically naïve and irrelevant. The problem with these theories is that they are so narrowly mechanistic. They assume that once our ancestors had stumbled upon the appropriate ingenious mechanism for linking sounds with meanings, language automatically evolved and changed.
From the perspective of modern science, the main obstacle to the evolution of language-like communication in nature is not a mechanistic one. Rather, it is the fact that symbols — arbitrary associations of sounds or other perceptible forms with corresponding meanings — are unreliable and may well be false. As the saying goes, 'words are cheap.' The problem of reliability was not recognised at all by Darwin, Müller or the other early evolutionist theorists.
Animal vocal signals are for the most part intrinsically reliable. When a cat purrs, the signal constitutes direct evidence of the animal's contented state. We can 'trust' the signal not because the cat is inclined to be honest, but because it just can't fake that sound. Primate vocal calls may be slightly more manipulable, but they remain reliable for the same reason — because they are hard to fake. Primate social intelligence is Machiavellian—self-serving and unconstrained by moral scruples. Monkeys and apes often attempt to deceive one another, while at the same time remaining constantly on guard against falling victim to deception themselves. Paradoxically, it is precisely primates' resistance to deception that blocks the evolution of their signalling systems along language-like lines. Language is ruled out because the best way to guard against being deceived is to ignore all signals except those that are instantly verifiable. Words automatically fail this test.
Words are easy to fake. Should they turn out to be lies, listeners will adapt by ignoring them in favour of hard-to-fake indices or cues. For language to work, then, listeners must be confident that those with whom they are on speaking terms are generally likely to be honest. A peculiar feature of language is 'displaced reference', which means reference to topics outside the currently perceptible situation. This property prevents utterances from being corroborated in the immediate 'here' and 'now'. For this reason, language presupposes relatively high levels of mutual trust in order to become established over time as an evolutionarily stable strategy. This stability is born of the longstanding mutual trust and is what grants language its authority. A theory of the origins of language must therefore explain why humans could begin trusting cheap signals in ways that other animals apparently cannot (see signalling theory).
The 'mother tongues' hypothesis was proposed in 2004 as a possible solution to this problem. W. Tecumseh Fitch suggested that the Darwinian principle of 'kin selection' — the convergence of genetic interests between relatives — might be part of the answer. Fitch suggests that languages were originally 'mother tongues'. If language evolved initially for communication between mothers and their own biological offspring, extending later to include adult relatives as well, the interests of speakers and listeners would have tended to coincide. Fitch argues that shared genetic interests would have led to sufficient trust and cooperation for intrinsically unreliable signals — words — to become accepted as trustworthy and so begin evolving for the first time.
Critics of this theory point out that kin selection is not unique to humans. Ape mothers also share genes with their offspring, as do all animals, so why is it only humans who speak? Furthermore, it is difficult to believe that early humans restricted linguistic communication to genetic kin: the incest taboo must have forced men and women to interact and communicate with non-kin. "Species often rely on verbal and nonverbal forms of communication, such as calls; non-vocal auditory outbursts, like the slap of a dolphin's tail on the water; bioluminescence; scent marking; chemical or tactile cues; visual signals and postural gestures" (Toothman). So even if we accept Fitch's initial premises, the extension of the posited 'mother tongue' networks from relatives to non-relatives remains unexplained. Fitch argues, however, that the extended period of physical immaturity of human infants, and the extrauterine development of human encephalisation, give the human mother-infant relationship a different and more extended period of intergenerational dependency than that found in any other species.
Ib Ulbæk invokes another standard Darwinian principle — 'reciprocal altruism' — to explain the unusually high levels of intentional honesty necessary for language to evolve. 'Reciprocal altruism' can be expressed as the principle that if you scratch my back, I'll scratch yours. In linguistic terms, it would mean that if you speak truthfully to me, I'll speak truthfully to you. Ordinary Darwinian reciprocal altruism, Ulbæk points out, is a relationship established between frequently interacting individuals. For language to prevail across an entire community, however, the necessary reciprocity would have needed to be enforced universally instead of being left to individual choice. Ulbæk concludes that for language to evolve, early society as a whole must have been subject to moral regulation. The evolution of such reciprocal altruism, and the prisoner's dilemma problem associated with free riders and defection, has been used to explain the rapid increase in socially driven encephalisation associated with the transition from Australopithecus to archaic Homo sapiens.
Critics point out that this theory fails to explain when, how, why or by whom 'obligatory reciprocal altruism' could possibly have been enforced. Various proposals have been offered to remedy this defect. A further criticism is that language doesn't work on the basis of reciprocal altruism anyway. Humans in conversational groups don't withhold information from all except listeners likely to offer valuable information in return. On the contrary, they seem to want to advertise to the world their access to socially relevant information, broadcasting it to anyone who will listen without thought of return.
Gossip, according to Robin Dunbar, does for group-living humans what manual grooming does for other primates — it allows individuals to service their relationships and so maintain their alliances on the basis of the principle, if you scratch my back, I'll scratch yours. As humans began living in larger and larger social groups, the task of manually grooming all one's friends and acquaintances became so time-consuming as to be unaffordable. In response to this problem, humans invented 'a cheap and ultra-efficient form of grooming' — vocal grooming. To keep your allies happy, you now needed only to 'groom' them with low-cost vocal sounds, servicing multiple allies simultaneously while keeping both hands free for other tasks. Vocal grooming then evolved gradually into vocal language — initially in the form of 'gossip'.
Critics of this theory point out that the very efficiency of 'vocal grooming' — the fact that words are so cheap — would have undermined its capacity to signal commitment of the kind conveyed by time-consuming and costly manual grooming. A further criticism is that the theory does nothing to explain the crucial transition from vocal grooming — the production of pleasing but meaningless sounds — to the cognitive complexities of syntactical speech. The latter criticism, however, assumes that the transition from vocal grooming to vocal language required some complex intermediate step, which is not self-evident. The former criticism likewise assumes, without clear support, that manual grooming is superior to vocal grooming at signalling commitment; studies showing a child's affinity for its mother's voice suggest, for example, that manual grooming holds no fixed hierarchical advantage over vocal grooming.
The ritual/speech coevolution theory was originally proposed by the social anthropologist Roy Rappaport before being elaborated by anthropologists such as Chris Knight, Jerome Lewis, Nick Enfield, Camilla Power and Ian Watts. Cognitive scientist and robotics engineer Luc Steels is another prominent supporter of this general approach, as is biological anthropologist/neuroscientist Terrence Deacon.
These scholars argue that there can be no such thing as a 'theory of the origins of language'. This is because language is not a separate adaptation but an internal aspect of something much wider — namely, human symbolic culture as a whole. Attempts to explain language independently of this wider context have spectacularly failed, say these scientists, because they are addressing a problem with no solution. Can we imagine a historian attempting to explain the emergence of credit cards independently of the wider system of which they are a part? Using a credit card makes sense only if you have a bank account institutionally recognised within a certain kind of advanced capitalist society—one where electronic communications technology and digital computers have already been invented and fraud can be detected and prevented. In much the same way, language would not work outside a specific array of social mechanisms and institutions. For example, it would not work for an ape communicating with other apes in the wild. Not even the cleverest ape could make language work under such conditions.
Lie and alternative, inherent in language...pose problems to any society whose structure is founded on language, which is to say all human societies. I have therefore argued that if there are to be words at all it is necessary to establish The Word, and that The Word is established by the invariance of liturgy.
— Roy Rappaport, 1979. Ecology, Meaning and Religion, pp. 210-11.
Advocates of this school of thought point out that words are cheap. As digital hallucinations, they are intrinsically unreliable. Should an especially clever ape, or even a group of articulate apes, try to use words in the wild, they would carry no conviction. The primate vocalisations that do carry conviction—those they actually use—are unlike words, in that they are emotionally expressive, intrinsically meaningful and reliable because they are relatively costly and hard to fake.
Language consists of digital contrasts whose cost is essentially zero. As pure social conventions, signals of this kind cannot evolve in a Darwinian social world — they are a theoretical impossibility. Being intrinsically unreliable, language works only if you can build up a reputation for trustworthiness within a certain kind of society — namely, one where symbolic cultural facts (sometimes called 'institutional facts') can be established and maintained through collective social endorsement. In any hunter-gatherer society, the basic mechanism for establishing trust in symbolic cultural facts is collective ritual. Therefore, the task facing researchers into the origins of language is more multidisciplinary than is usually supposed. It involves addressing the evolutionary emergence of human symbolic culture as a whole, with language an important but subsidiary component.
Critics of the theory include Noam Chomsky, who terms it the 'non-existence' hypothesis — a denial of the very existence of language as an object of study for natural science. Chomsky's own theory is that language emerged in an instant and in perfect form, prompting his critics in turn to retort that only something that doesn't exist—a theoretical construct or convenient scientific fiction—could possibly emerge in such a miraculous way. The controversy remains unresolved.
It has been suggested that language might have evolved partly to block communication with outsiders, setting one's own tribe apart from the others. This idea is connected with the Code-talker paradox and the Tower of Babel story, and is not inconsistent with the mother-tongue, grooming-within-the-tribe, and incest-avoidance hypotheses described above.
The gestural theory states that human language developed from gestures that were used for simple communication.
Two types of evidence support this theory.
Research has found strong support for the idea that verbal language and sign language depend on similar neural structures. Patients who used sign language and who suffered from a left-hemisphere lesion showed the same disorders with their sign language as speaking patients did with their oral language. Other researchers found that the same left-hemisphere brain regions were active during sign language as during the use of vocal or written language.
The important question for gestural theories is why there was a shift to vocalization. Various explanations have been proposed.
Humans still use hand and facial gestures when they speak, especially when people meet who have no language in common. There are also, of course, a great number of sign languages still in existence, commonly associated with Deaf communities; it is important to note that these sign languages are equal in complexity, sophistication, and expressive power to any oral language—the cognitive functions are similar and the parts of the brain used are similar. The main difference is that the "phonemes" are produced on the outside of the body, articulated with hands, body, and facial expression, rather than inside the body, articulated with tongue, teeth, lips, and breathing.
Critics of gestural theory note that it is difficult to name serious reasons why the initial pitch-based vocal communication (which is present in primates) would be abandoned in favour of the much less effective non-vocal, gestural communication. However, Michael Corballis has pointed out that primate vocal communication (such as alarm calls) cannot be controlled consciously, unlike hand movement, and thus is not credible as precursor to human language; primate vocalisation is rather homologous to and continued in involuntary reflexes (connected with basic human emotions) such as screams or laughter (the fact that these can be faked does not disprove the fact that genuine involuntary responses to fear or surprise exist). Also, gesture is not generally less effective, and depending on the situation can even be advantageous, for example in a loud environment or where it is important to be silent, such as on a hunt. Other challenges to the "gesture-first" theory have been presented by researchers in psycholinguistics, including David McNeill.
In humans, functional MRI studies have reported finding areas homologous to the monkey mirror neuron system in the inferior frontal cortex, close to Broca's area, one of the hypothesized language regions of the brain. This has led to suggestions that human language evolved from a gesture performance/understanding system implemented in mirror neurons. Mirror neurons have been said to have the potential to provide a mechanism for action-understanding, imitation-learning, and the simulation of other people's behaviour. This hypothesis is supported by some cytoarchitectonic homologies between monkey premotor area F5 and human Broca's area. Rates of vocabulary expansion link to the ability of children to vocally mirror non-words and so to acquire the new word pronunciations. Such speech repetition occurs automatically, quickly and separately in the brain to speech perception. Moreover such vocal imitation can occur without comprehension such as in speech shadowing and echolalia.
Further evidence for this link comes from a recent study in which the brain activity of two participants was measured using fMRI while they communicated words to each other with hand gestures in a game of charades – a modality that some have suggested might represent the evolutionary precursor of human language. Analysis of the data using Granger causality revealed that the mirror-neuron system of the observer indeed reflects the pattern of activity in the motor system of the sender, supporting the idea that the motor concept associated with the words is transmitted from one brain to another using the mirror system.
The mirror neuron system, however, seems inherently inadequate to play any role in syntax: the hierarchical recursive structure that is a defining property of human languages is flattened into linear sequences of phonemes in speech, making the recursive structure inaccessible to sensory detection.
According to Dean Falk's 'putting the baby down' theory, vocal interactions between early hominin mothers and infants sparked a sequence of events that led, eventually, to our ancestors' earliest words. The basic idea is that evolving human mothers, unlike their monkey and ape counterparts, couldn't move around and forage with their infants clinging onto their backs. Loss of fur in the human case left infants with no means of clinging on. Frequently, therefore, mothers had to put their babies down. As a result, these babies needed to be reassured that they were not being abandoned. Mothers responded by developing 'motherese' – an infant-directed communicative system embracing facial expressions, body language, touching, patting, caressing, laughter, tickling and emotionally expressive contact calls. The argument is that language somehow developed out of all this.
Critics note that while this theory may explain a certain kind of infant-directed 'protolanguage' – known today as 'motherese' – it does little to solve the really difficult problem, which is the emergence among adults of syntactical speech.
However, in The Mental and Social Life of Babies, psychologist Kenneth Kaye noted that no usable adult language could have evolved without interactive communication between very young children and adults. "No symbolic system could have survived from one generation to the next if it could not have been easily acquired by young children under their normal conditions of social life."
'Grammaticalisation' is a continuous historical process in which free-standing words develop into grammatical appendages, while these in turn become ever more specialised and grammatical. An initially 'incorrect' usage, in becoming accepted, leads to unforeseen consequences, triggering knock-on effects and extended sequences of change. Paradoxically, grammar evolves because, in the final analysis, humans care less about grammatical niceties than about making themselves understood. If this is how grammar evolves today, according to this school of thought, we can legitimately infer similar principles at work among our distant ancestors, when grammar itself was first being established.
In order to reconstruct the evolutionary transition from early language to languages with complex grammars, we need to know which hypothetical sequences are plausible and which are not. In order to convey abstract ideas, the first recourse of speakers is to fall back on immediately recognisable concrete imagery, very often deploying metaphors rooted in shared bodily experience. A familiar example is the use of concrete terms such as 'belly' or 'back' to convey abstract meanings such as 'inside' or 'behind'. Equally metaphorical is the strategy of representing temporal patterns on the model of spatial ones. Hence in English we say 'It is going to rain', modelled on 'I am going to London'. We might abbreviate this colloquially to 'It's gonna rain'. Even when in a hurry, we don't say 'I'm gonna London' – the contraction is restricted to the job of specifying tense. From such examples we can see why grammaticalisation is consistently unidirectional – from concrete to abstract meaning, not the other way around.
Grammaticalisation theorists picture early language as simple, perhaps consisting only of nouns. Even under that extreme theoretical assumption, however, it is difficult to imagine what cognitive inhibition would realistically have prevented people from using – say – 'spear' as if it were a verb, as we do in English ('Let's spear this pig!'). Irrespective of the niceties of grammar as professional linguists understand it, people in real life would surely have used their nouns as verbs or their verbs as nouns as occasion demanded. In short, while a noun-only language might seem theoretically possible, grammaticalisation theory indicates that it cannot have remained fixed in that state for any length of time.
Creativity drives grammatical change. This presupposes a certain attitude on the part of listeners. Instead of punishing deviations from accepted usage, listeners must prioritise imaginative mind-reading. We shouldn't take for granted that cognitive stance. Imaginative creativity – emitting a leopard alarm when no leopard was present, for example – is not the kind of behaviour which vervet monkeys would appreciate or reward. Creativity and reliability are incompatible demands; for 'Machiavellian' primates as for animals generally, the overriding pressure is to demonstrate reliability. If humans escape these constraints, it is because in our case, listeners are primarily interested in mental states.
To focus on mental states is to accept fictions – inhabitants of the imagination – as potentially informative and interesting. Take the use of metaphor. A metaphor is, literally, a false statement. Think of Romeo's declaration, 'Juliet is the sun!'. Juliet is a woman, not a ball of hot gases in the sky, but human listeners are not (or not usually) pedants insistent on point-by-point factual accuracy. They want to know what the speaker has in mind. Grammaticalisation is essentially based on metaphor. To outlaw its use would be to stop grammar from evolving and, by the same token, to exclude all possibility of expressing abstract thought.
A criticism of all this is that while grammaticalisation theory might explain language change today, it doesn't satisfactorily address the really difficult challenge – explaining the initial transition from primate-style communication to language as we know it. Rather, the theory assumes that language already exists. As Bernd Heine and Tania Kuteva acknowledge: "Grammaticalization requires a linguistic system that is used regularly and frequently within a community of speakers and is passed on from one group of speakers to another". Outside modern humans, such conditions don't prevail.
According to a study investigating the song differences between the white-rumped munia and its domesticated counterpart, the Bengalese finch, wild munias use a highly stereotyped song sequence, whereas the domesticated ones sing a highly unconstrained song. In wild finches, song syntax is subject to female preference (sexual selection) and remains relatively fixed. In the Bengalese finch, however, natural selection has been replaced by breeding, in this case for colorful plumage; decoupled from selective pressures, stereotyped song syntax is allowed to drift and is replaced, within 1,000 generations, by a variable and learned sequence. Wild finches, moreover, are incapable of learning song sequences from other finches. In the field of bird vocalization, brains capable of producing only an innate song have very simple neural pathways: the primary forebrain motor center, called the robust nucleus of the arcopallium, connects to midbrain vocal outputs, which in turn project to brainstem motor nuclei. By contrast, in brains capable of learning songs, the arcopallium receives input from numerous additional forebrain regions, including those involved in learning and social experience. Control over song generation has become less constrained, more distributed, and more flexible.
When compared with other primates, whose communication system is restricted to a highly stereotypic repertoire of hoots and calls, humans have very few prespecified vocalizations, extant examples being laughter and sobbing. Moreover, these remaining innate vocalizations are generated by restricted neuronal pathways, whereas language is generated by a highly distributed system involving numerous regions of the human brain.
A salient feature of language is that while language competency is inherited, the languages themselves are transmitted via culture. Also transmitted via culture are understandings, such as technological ways of doing things, that are framed as language-based explanations. Hence one would expect a robust co-evolutionary trajectory between language competency and culture: proto-humans capable of the first, and presumably rudimentary, versions of protolanguage would have better access to cultural understandings, and cultural understandings, conveyed in protolanguages that children's brains could readily learn, were more likely to be transmitted, thereby conferring the benefits accrued.
Hence proto-humans indubitably engaged in, and continue to engage in, what is called niche construction, creating cultural niches that provide understandings key to survival, and undergoing evolutionary changes that optimize their ability to flourish in such niches. Selection pressures that operated to sustain instincts important for survival in prior niches would be expected to relax as humans became increasingly dependent on their self-created cultural niches, while any innovations that facilitated cultural adaptation—in this case, innovations in language competency—would be expected to spread.
One useful way to think about human evolution is that we are self-domesticated apes. Just as domestication relaxed selection for stereotypic songs in the finches—mate choice was supplanted by choices made by the aesthetic sensibilities of bird breeders and their customers—so might our cultural domestication have relaxed selection on many of our primate behavioral traits, allowing old pathways to degenerate and reconfigure. Given the highly indeterminate way that mammalian brains develop—they basically construct themselves "bottom up", with one set of neuronal interactions setting the stage for the next round of interactions—degraded pathways would tend to seek out and find new opportunities for synaptic hookups. Such inherited de-differentiations of brain pathways might have contributed to the functional complexity that characterizes human language. And, as exemplified by the finches, such de-differentiations can occur in very rapid timeframes.
A distinction can be drawn between speech and language. Language is not necessarily spoken: it might alternatively be written or signed. Speech is one among a number of different methods of encoding and transmitting linguistic information, albeit arguably the most natural one.
Some scholars view language as an initially cognitive development, its 'externalisation' to serve communicative purposes occurring later in human evolution. According to one such school of thought, the key feature distinguishing human language is recursion – in this context, the iterative embedding of phrases within phrases. Other scholars – notably Daniel Everett – deny that recursion is universal, citing certain languages (e.g. Pirahã) which allegedly lack this feature.
The ability to ask questions is considered by some to distinguish language from nonhuman systems of communication. Some captive primates (notably bonobos and chimpanzees), having learned to use rudimentary signing to communicate with their human trainers, proved able to respond correctly to complex questions and requests. Yet they failed to ask even the simplest questions themselves. Conversely, human children are able to ask their first questions (using only question intonation) during the babbling period of their development, long before they start using syntactic structures. Although babies from different cultures acquire native languages from their social environment, all languages of the world without exception – tonal, non-tonal, intonational and accented – use similar rising "question intonation" for yes–no questions. This fact is strong evidence of the universality of question intonation.
One of the intriguing abilities that language users have is that of high-level reference, or the ability to refer to things or states of being that are not in the immediate realm of the speaker. This ability is often related to theory of mind, or an awareness of the other as a being like the self with individual wants and intentions. Chomsky, Hauser and Fitch (2002) identify six main aspects of this high-level reference system.
Simon Baron-Cohen (1999) argues that theory of mind must have preceded language use, based on evidence of use of the following characteristics as much as 40,000 years ago: intentional communication, repairing failed communication, teaching, intentional persuasion, intentional deception, building shared plans and goals, intentional sharing of focus or topic, and pretending. Moreover, Baron-Cohen argues that many primates show some, but not all, of these abilities. Call and Tomasello’s research on chimpanzees supports this, in that individual chimps seem to understand that other chimps have awareness, knowledge, and intention, but do not seem to understand false beliefs. Many primates show some tendencies toward a theory of mind, but not a full one as humans have. Ultimately, there is some consensus within the field that a theory of mind is necessary for language use. Thus, the development of a full theory of mind in humans was a necessary precursor to full language use.
In one particular study, rats and pigeons were required to press a button a certain number of times to get food: the animals showed very accurate distinction for numbers less than four, but as the numbers increased, the error rate increased (Chomsky, Hauser & Fitch, 2002). Matsuzawa (1985) attempted to teach chimpanzees Arabic numerals. The difference between primates and humans in this regard was very large, as it took the chimps thousands of trials to learn 1–9, with each number requiring a similar amount of training time; yet, after learning the meaning of 1, 2 and 3 (and sometimes 4), children easily comprehend the value of greater integers by using a successor function (i.e. 2 is 1 greater than 1, 3 is 1 greater than 2, 4 is 1 greater than 3; once 4 is reached it seems most children have an "a-ha!" moment and understand that the value of any integer n is 1 greater than the previous integer). Put simply, other primates learn the meaning of numbers one by one, similar to their approach to other referential symbols, while children first learn an arbitrary list of symbols (1, 2, 3, 4...) and then later learn their precise meanings. These results can be seen as evidence for the application of the "open-ended generative property" of language in human numeral cognition.
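The contrast described above can be sketched in a short program. This is an illustrative model only (the function names are invented for the example): it contrasts rote symbol-by-symbol learning, as in the chimpanzee trials, with a successor rule that generates every subsequent value from a small learned base, as in child number acquisition.

```python
def rote_value(symbol, memorized):
    """Rote lookup: each numeral's value must be trained separately;
    untrained symbols simply have no value."""
    return memorized.get(symbol)

def successor_value(symbol, known):
    """Successor rule: once the base case is grasped, any integer n is
    reached by repeatedly adding 1, with no further training needed."""
    n = int(symbol)
    value = known["1"]
    for _ in range(n - 1):
        value += 1  # successor step: next numeral = previous + 1
    return value

memorized = {"1": 1, "2": 2, "3": 3}   # only explicitly trained pairs
print(rote_value("7", memorized))      # None: '7' was never trained
print(successor_value("7", {"1": 1}))  # 7: derived by repeated succession
```

The point of the sketch is the asymmetry in generalization: the rote learner's knowledge stops at its training set, while the successor rule is open-ended.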
Hockett (1966) details a list of features regarded as essential to describing human language. In the domain of the lexical-phonological principle, two features of this list are most important:
The sound system of a language is composed of a finite set of simple phonological items. Under the specific phonotactic rules of a given language, these items can be recombined and concatenated, giving rise to morphology and the open-ended lexicon. A key feature of language is that a simple, finite set of phonological items gives rise to an infinite lexical system wherein rules determine the form of each item, and meaning is inextricably linked with form. Phonological syntax, then, is a simple combination of pre-existing phonological units. Related to this is another essential feature of human language: lexical syntax, wherein pre-existing units are combined, giving rise to semantically novel or distinct lexical items.
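The lexical-phonological principle described above can be illustrated with a toy model. The phoneme inventory and the single phonotactic rule (every syllable is consonant + vowel) are invented for the example; the point is that recombination of a small, finite set of items yields a combinatorially exploding space of possible word forms.

```python
from itertools import product

# Hypothetical toy inventory: a finite set of phonological items.
consonants = ["p", "t", "k", "m"]
vowels = ["a", "i", "u"]

# Phonotactic rule for this toy language: a syllable is one consonant
# followed by one vowel (CV).
syllables = ["".join(s) for s in product(consonants, vowels)]

def word_forms(n_syllables):
    """All word forms of a given length permitted by the CV phonotactics."""
    return ["".join(w) for w in product(syllables, repeat=n_syllables)]

# A finite inventory yields an open-ended lexicon: 12, 144, 1728, ... forms.
print(len(syllables))      # 12
print(len(word_forms(2)))  # 144
print(len(word_forms(3)))  # 1728
```

With only seven phonemes and one rule, the number of licit forms grows exponentially with word length, which is the sense in which a finite phonological system supports an effectively unbounded lexicon.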
Certain elements of the lexical-phonological principle are known to exist outside of humans. While all (or nearly all) have been documented in some form in the natural world, very few co-exist within the same species. Birdsong, singing apes, and the songs of whales all display phonological syntax, combining units of sound into larger structures devoid of enhanced or novel meaning. Certain species of primate do have simple phonological systems with units referring to entities in the world. However, in contrast to human systems, the units in these primates' systems normally occur in isolation, betraying a lack of lexical syntax. There is new evidence to suggest that Campbell's monkeys also display lexical syntax, combining two calls (a predator alarm call with a "boom", the combination of which denotes a lessened threat of danger); however, it is still unclear whether this is a lexical or a morphological phenomenon.
Pidgins are significantly simplified languages with only rudimentary grammar and a restricted vocabulary. In their early stage pidgins mainly consist of nouns, verbs, and adjectives with few or no articles, prepositions, conjunctions or auxiliary verbs. Often the grammar has no fixed word order and the words have no inflection.
If contact is maintained between the groups speaking the pidgin for long periods of time, the pidgins may become more complex over many generations. If the children of one generation adopt the pidgin as their native language it develops into a creole language, which becomes fixed and acquires a more complex grammar, with fixed phonology, syntax, morphology, and syntactic embedding. The syntax and morphology of such languages may often have local innovations not obviously derived from any of the parent languages.
Studies of creole languages around the world have suggested that they display remarkable similarities in grammar and are developed uniformly from pidgins in a single generation. These similarities are apparent even when creoles do not share any common language origins. In addition, creoles share similarities despite being developed in isolation from each other. Syntactic similarities include subject–verb–object word order. Even when creoles are derived from languages with a different word order they often develop the SVO word order. Creoles tend to have similar usage patterns for definite and indefinite articles, and similar movement rules for phrase structures even when the parent languages do not.
Field primatologists can give us useful insights into great ape communication in the wild. The main finding is that non-human primates, including the great apes, produce calls that are graded, as opposed to categorically differentiated, with listeners striving to evaluate subtle gradations in signallers' emotional and bodily states. Apes find it extremely difficult to produce vocalisations in the absence of the corresponding emotional states. In captivity, apes have been taught rudimentary forms of sign language or have been persuaded to use lexigrams—symbols that do not graphically resemble the corresponding words—on computer keyboards. Some apes, such as Kanzi, have been able to learn and use hundreds of lexigrams.
The Broca's and Wernicke's areas in the primate brain are responsible for controlling the muscles of the face, tongue, mouth, and larynx, as well as recognizing sounds. Primates are known to make "vocal calls", and these calls are generated by circuits in the brainstem and limbic system. However, modern brain scans of chattering chimpanzees show that they use Broca's area to chatter, and there is evidence that monkeys hearing monkey chatter use the same brain regions as humans hearing speech.
In the wild, the communication of vervet monkeys has been the most extensively studied. They are known to make up to ten different vocalizations. Many of these are used to warn other members of the group about approaching predators. They include a "leopard call", a "snake call", and an "eagle call". Each call triggers a different defensive strategy in the monkeys that hear the call and scientists were able to elicit predictable responses from the monkeys using loudspeakers and prerecorded sounds. Other vocalizations may be used for identification. If an infant monkey calls, its mother turns toward it, but other vervet mothers turn instead toward that infant's mother to see what she will do.
Similarly, researchers have demonstrated that chimpanzees (in captivity) use different "words" in reference to different foods. They recorded vocalizations that chimps made in reference, for example, to grapes, and then other chimps pointed at pictures of grapes when they heard the recorded sound.
Regarding articulation, there is considerable speculation about the language capabilities of early Homo (2.5 to 0.8 million years ago). Anatomically, some scholars believe features of bipedalism, which developed in australopithecines around 3.5 million years ago, would have brought changes to the skull, allowing for a more L-shaped vocal tract. The shape of the tract and a larynx positioned relatively low in the neck are necessary prerequisites for many of the sounds humans make, particularly vowels. Other scholars believe that, based on the position of the larynx, not even Neanderthals had the anatomy necessary to produce the full range of sounds modern humans make. It was earlier proposed that differences between Homo sapiens and Neanderthal vocal tracts could be seen in fossils, but the finding that the Neanderthal hyoid bone (see below) was identical to that found in Homo sapiens, has weakened these theories. Still another view considers the lowering of the larynx as irrelevant to the development of speech.
The term proto-language, as defined by linguist Derek Bickerton, denotes a primitive form of communication lacking the defining grammatical features of fully modern language.
That is, a stage in the evolution of language somewhere between great ape language and fully developed modern human language. Bickerton (2009) places the first emergence of such a proto-language with the earliest appearance of Homo, and associates its appearance with the pressure of behavioral adaptation to the niche construction of scavenging faced by Homo habilis.
Anatomical features such as the L-shaped vocal tract have been continuously evolving, as opposed to appearing suddenly. Hence it is most likely that Homo habilis and Homo erectus during the Lower Pleistocene had some form of communication intermediate between that of modern humans and that of other primates.
Steven Mithen proposed the term Hmmmmm for the pre-linguistic system of communication used by archaic Homo, beginning with Homo ergaster and reaching its highest sophistication in the Middle Pleistocene with Homo heidelbergensis and Homo neanderthalensis. Hmmmmm is an acronym for holistic (non-compositional), manipulative (utterances are commands or suggestions, not descriptive statements), multi-modal (acoustic as well as gestural and mimetic), musical, and mimetic.
H. heidelbergensis was a close relative (most probably a migratory descendant) of Homo ergaster. Some researchers believe H. ergaster to be the first hominid to make controlled vocalizations, possibly mimicking animal vocalizations, and that as H. heidelbergensis developed a more sophisticated culture it proceeded from this point and possibly developed an early form of symbolic language.
The discovery in 2007 of a Neanderthal hyoid bone suggests that Neanderthals may have been anatomically capable of producing sounds similar to modern humans. The hypoglossal nerve, which passes through the hypoglossal canal, controls the movements of the tongue, and the canal's size is said to reflect speech abilities. Hominids who lived earlier than 300,000 years ago had hypoglossal canals more akin to those of chimpanzees than of humans.
However, although Neanderthals may have been anatomically able to speak, Richard G. Klein in 2004 doubted that they possessed a fully modern language. He largely bases his doubts on the fossil record of archaic humans and their stone tool kit. For 2 million years following the emergence of Homo habilis, the stone tool technology of hominids changed very little. Klein, who has worked extensively on ancient stone tools, describes the crude stone tool kit of archaic humans as impossible to break down into categories based on their function, and reports that Neanderthals seem to have had little concern for the final form of their tools. Klein argues that the Neanderthal brain may have not reached the level of complexity required for modern speech, even if the physical apparatus for speech production was well-developed. The issue of the Neanderthal's level of cultural and technological sophistication remains a controversial one.
Anatomically modern humans first appear in the fossil record 195,000 years ago in Ethiopia. But while they were modern anatomically, the archaeological evidence available leaves little indication that they behaved any differently from the earlier Homo heidelbergensis. They retained the same Acheulean stone tools and hunted less efficiently than did modern humans of the Late Pleistocene. The transition to the more sophisticated Mousterian takes place only about 120,000 years ago, and is shared by both H. sapiens and H. neanderthalensis.
The development of fully modern behavior in H. sapiens, not shared by H. neanderthalensis or any other variety of Homo, is dated to some 70,000 to 50,000 years ago.
The development of more sophisticated tools, for the first time constructed out of more than one material (e.g. bone or antler) and sortable into different categories of function (such as projectile points, engraving tools, knife blades, and drilling and piercing tools), is often taken as proof for the presence of fully developed language, assumed to be necessary for the teaching of the processes of manufacture to offspring.
The greatest step in language evolution would have been the progression from primitive, pidgin-like communication to a creole-like language with all the grammar and syntax of modern languages.
Some scholars believe that this step could only have been accomplished with some biological change to the brain, such as a mutation. It has been suggested that a gene such as FOXP2 may have undergone a mutation allowing humans to communicate. However, recent genetic studies have shown that Neanderthals shared the same FOXP2 allele with H. sapiens. The allele therefore carries no mutation unique to H. sapiens; instead, this genetic change predates the Neanderthal–H. sapiens split.
There is still considerable debate as to whether language developed gradually over thousands of years or whether it appeared suddenly.
The Broca's and Wernicke's areas of the primate brain also appear in the human brain, the former being involved in many cognitive and perceptual tasks, the latter contributing to language skills. The same circuits discussed in the primates' brainstem and limbic system control non-verbal sounds in humans (laughing, crying, etc.), which suggests that the human language center is a modification of neural circuits common to all primates. This modification and its capacity for linguistic communication seem to be unique to humans, which implies that the language organ emerged after the human lineage split from that of the other primates (chimpanzees and bonobos).
According to the Out of Africa hypothesis, around 50,000 years ago a group of humans left Africa and proceeded to inhabit the rest of the world, including Australia and the Americas, which had never been populated by archaic hominids. Some scientists believe that Homo sapiens did not leave Africa before that because they had not yet attained modern cognition and language, and consequently lacked the skills or the numbers required to migrate. However, given the fact that Homo erectus managed to leave the continent much earlier (without extensive use of language, sophisticated tools, or anatomical modernity), the reasons why anatomically modern humans remained in Africa for such a long period remain unclear.
Linguistic monogenesis is the hypothesis that there was a single proto-language, sometimes called Proto-Human, from which all other vocal languages spoken by humans descend. (This does not apply to sign languages, which are known to arise independently rather frequently.) If the assumption of a "Proto-Human" language is accepted, its date may be set anywhere between 200,000 years ago (the age of Homo sapiens) and 50,000 years ago (the age of behavioral modernity).
The first serious scientific attempt to establish the reality of monogenesis was that of Alfredo Trombetti, in his book L'unità d'origine del linguaggio, published in 1905 (cf. Ruhlen 1994:263). Trombetti estimated that the common ancestor of existing languages had been spoken between 100,000 and 200,000 years ago (1922:315).
Monogenesis was dismissed by many linguists in the late 19th and early 20th centuries, when the doctrine of the polygenesis of the human races and their languages held the ascendancy (e.g. Saussure 1986/1916:190).
The best-known supporter of monogenesis in America in the mid-20th century was Morris Swadesh (cf. Ruhlen 1994:215). He pioneered two important methods for investigating deep relationships between languages, lexicostatistics and glottochronology.
The multiregional hypothesis would entail that modern language evolved independently on all the continents, a proposition considered implausible by proponents of monogenesis. The hypothesis holds that humans first arose near the beginning of the Pleistocene two million years ago and subsequent human evolution has been within a single, continuous human species. This species encompasses archaic human forms such as Homo erectus and Neanderthals as well as modern forms, and evolved worldwide to the diverse populations of modern Homo sapiens sapiens. The theory contends that humans evolve through a combination of adaptation within various regions of the world and gene flow between those regions. Proponents of multiregional origin point to fossil and genomic data and continuity of archaeological cultures as support for their hypothesis.
The descended larynx was formerly viewed as a structure unique to the human vocal tract and essential to the development of speech and language. However, it has been found in other species, including some aquatic mammals and large deer (e.g. Red Deer), and the larynx has been observed to descend during vocalizations in dogs, goats, and alligators. In humans, the descended larynx extends the length of the vocal tract and expands the variety of sounds humans can produce. Some scholars claim that the ubiquity of nonverbal communication in humans stands as evidence of the non-essentiality of the descended larynx to the development of language.
The descended larynx has non-linguistic functions as well, possibly exaggerating the apparent size of an animal (through vocalizations with lower than expected pitch). Thus, although it plays an important role in speech production, expanding the variety of sounds humans can produce, it may not have evolved specifically for this purpose, as has been suggested by Jeffrey Laitman, and as per Hauser, Chomsky, and Fitch (2002), could be an example of preadaptation.
The search for the origin of language has a long history rooted in mythology. Most mythologies do not credit humans with the invention of language but speak of a divine language predating human language. Mystical languages used to communicate with animals or spirits, such as the language of the birds, are also common, and were of particular interest during the Renaissance.
Vāc is the Hindu goddess of speech, or "speech personified". As brahman "sacred utterance", she has a cosmological role as the "Mother of the Vedas".

The Aztecs' story maintains that only a man, Coxcox, and a woman, Xochiquetzal, survive, having floated on a piece of bark. They found themselves on land and begot many children who were at first born unable to speak, but subsequently, upon the arrival of a dove, were endowed with language, although each one was given a different speech such that they could not understand one another.
Such sources of mysticism can be understood as having developed by analogy with the notion that individuals' fates were tethered to the whims of the gods, nature, etc. Historically, language was considered to be something bequeathed by divinity in the same way as crops were overseen by benevolent yet mercurial gods. As the mystery behind how crops grew, for example, disappeared with technological advance, so too did the notion of language as divinely given slowly dissipate.
History contains a number of anecdotes about people who attempted to discover the origin of language by experiment. The first such tale was told by Herodotus (Histories 2.2). He relates that Pharaoh Psammetichus (probably Psammetichus I, 7th century BC) had two children raised by a shepherd, with the instructions that no one should speak to them, but that the shepherd should feed and care for them while listening to determine their first words. When one of the children cried "bekos" with outstretched arms the shepherd concluded that the word was Phrygian because that was the sound of the Phrygian word for bread. From this Psammetichus concluded that the first language was Phrygian. King James V of Scotland is said to have tried a similar experiment: his children were supposed to have spoken Hebrew. Both the medieval monarch Frederick II and Akbar are said to have tried similar experiments; the children involved in these experiments did not speak.
Late 18th to early 19th century European scholarship assumed that the languages of the world reflected various stages in the development from primitive to advanced speech, culminating in the Indo-European languages, seen as the most advanced.
Modern linguistics does not begin until the late 18th century, and the Romantic or animist theses of Johann Gottfried Herder and Johann Christoph Adelung remained influential well into the 19th century. The question of language origins seemed inaccessible to methodical approaches, and in 1866 the Linguistic Society of Paris famously banned all discussion of the origin of language, deeming it to be an unanswerable problem. An increasingly systematic approach to historical linguistics developed in the course of the 19th century, reaching its culmination in the Neogrammarian school of Karl Brugmann and others.
However, scholarly interest in the question of the origin of language has only gradually been rekindled from the 1950s on (and then controversially) with ideas such as universal grammar, mass comparison and glottochronology.
The "origin of language" as a subject in its own right emerged out of studies in neurolinguistics, psycholinguistics and human evolution. The Linguistic Bibliography introduced "Origin of language" as a separate heading in 1988, as a sub-topic of psycholinguistics. Dedicated research institutes of evolutionary linguistics are a recent phenomenon, emerging only in the 1990s.
Beginning in 1979, the recently installed Nicaraguan government initiated the country's first widespread effort to educate deaf children. Prior to this there was no deaf community in the country. A center for special education established a program initially attended by 50 young deaf children. By 1983 the center had 400 students. The center did not have access to teaching facilities for any of the sign languages that are used around the world; consequently, the children were not taught any sign language. The language program instead emphasized spoken Spanish and lipreading, and the use of signs by teachers was limited to fingerspelling (using simple signs to sign the alphabet). The program achieved little success, with most students failing to grasp the concept of Spanish words.
The first children who arrived at the center came with only a few crude gestural signs developed within their own families. However, when the children were placed together for the first time they began to build on one another's signs. As more and younger children joined, the language became more complex. The children's teachers, who were having limited success at communicating with their students, watched in awe as the children began communicating amongst themselves.
Later the Nicaraguan government solicited help from Judy Kegl, an American sign-language expert at Northeastern University. As Kegl and other researchers began to analyze the language, they noticed that the younger children had taken the pidgin-like form of the older children to a higher level of complexity, with verb agreement and other conventions of grammar.