Deciphering IPA

By: Larissa B. Cayobit, ABCOM III

Table of Contents

Rationale
Aims/outcomes/duration

Chapter 1 Organs of Speech
Overview
1.1 Speech organs, how important?
1.2 Speech Circuitry
1.3 The Vocal Folds
1.4 Descriptions/functions of organs
1.5 Competencies

Chapter 2 Phonetics and Phonology
Overview
2.1 Phonology
2.2 Difference between Phonology and Phonetics
2.3 Competencies

Chapter 3 Sounds
Overview
3.1 Sound
3.2 Sound wave properties and characteristics
3.3 The sound waves of nature
3.4 Sound properties and their perception
3.5 Competencies

Chapter 4 Pronunciation
Overview
4.1 Pronouncing
4.2 English sounds
4.3 Competencies

Chapter 5 Sound Articulation and Manipulation
Overview
5.1 Articulatory phonetics
5.2 Manner of articulation
5.3 Place of articulation
5.4 Competencies

Chapter 6 Communication
Overview
6.1 Communication
6.2 Barriers to effective human communication
6.3 Communication cycle
6.4 Types of communication
6.5 Forms of communication
6.6 Components of the communication process
6.7 Competencies

Chapter 7 Non-verbal Communication
Overview
7.1 Non-verbal communication
7.2 Haptics
7.3 Interpreting non-verbal elements
7.4 Competencies

Chapter 8 International Phonetic Alphabet
Overview
8.1 International Phonetic Alphabet
8.2 Consonants
8.3 Vowels
8.4 Competencies

i. Teaching Strategies
ii. References
iii. Credits


Rationale:

Communication plays a large part in the unity, peace, and oneness of a culture, a society, and a country, because in communicating we reach out to others and express our ideas, letting people know what we think and have in mind.

Conversing is the most familiar style of communication in people's everyday lives. Through it we reach out to others and let others reach out to us in ways only we know. We talk about things we know, things we do, things that happened, and things that have been done. With a connection as simple as this, we let many people enter our lives.

Communication performs all these functions in our society, building and uniting people in a way nothing else can. When we talk or think about communication, what comes to mind first is talking, chatting, and conversing. But what about writing and signs? These are also part of communication, because communication comes in two kinds: verbal communication and non-verbal communication. Non-verbal communication includes sign language, symbols, body language, and so on. Talking alone does not make up communication; anything that conveys an idea or knowledge is, in the end, communication.

One example of non-verbal communication and symbolism is the IPA, or the International Phonetic Alphabet. It is made up of symbols or letters that are based on the International Standard Language Scale. Writing a word with IPA symbols also means representing its particular sound, emphasis, and pronunciation. The pronunciation of every letter or symbol depends on the symbol itself, which is based on the international system. In pronouncing every letter, the proper manipulation and articulation should be used. These are some of the aspects of the IPA that can be learned in detail as you go along the module.


Aims:

To know the Nature of IPA

To be familiarized with the IPA symbols as a guide to better one's pronunciation

To understand the significance of IPA

To be knowledgeable on sound manipulation and articulation

To be able to transcribe IPA symbols.

Learning Outcomes:

Distinguish the different IPA symbols

Master the proper pronunciation of IPA symbols

Discuss how a sound is produced by the articulators

Understand the International sound manipulation

Duration: 5 months

Chapter One/Organs of Speech

Overview:

Language is purely human and a natural method of communicating ideas,

emotions, and desires by means of a system of voluntarily produced symbols. These

symbols are, in the first instance, auditory and they are produced by the so-called

"organs of speech." There is no visible instinctive basis in human speech as such,

however much instinctive expressions and the natural environment may serve as an

incentive for the development of certain elements of speech, however much instinctive

tendencies, motor and other, may give a fixed range or mold to linguistic expression.

Such human or animal communication, if "communication" it may be called, as is

brought about by involuntary, instinctive cries is not, in our sense, language at all. There

are, properly speaking, no organs of speech; there are only organs that are incidentally

useful in the production of speech sounds. The lungs, the larynx, the palate, the nose,

the tongue, the teeth, and the lips, are all so utilized, but they are no more to be thought

of as primary organs of speech than are the fingers to be considered as essentially

organs of piano-playing or the knees as organs of prayer. Speech is not a simple

activity that is carried on by one or more organs biologically adapted to the purpose. It is

an extremely complex and ever-shifting network of adjustments—in the brain, in the

nervous system, and in the articulating and auditory organs—tending towards the

desired end of communication. There are parts of our body that serve as organs for speaking without our even knowing they exist. In this chapter we will find out what the organs of speech are.


1.1

Speech Organs, How Important?

Speech organs produce the many sounds needed for language. Organs used include the lips, teeth, tongue, alveolar ridge, hard palate, velum (soft palate), uvula and glottis. Speech organs, otherwise called articulators, are divided into two groups: passive articulators and active articulators. Passive articulators are those which remain static during the articulation of sound. The upper lip, upper teeth, alveolar ridge, hard palate, etc. are the passive articulators. Active articulators move towards these passive articulators to produce various speech sounds in different manners. The most important active articulator is the tongue. The uvula, the lower jaw (which includes the lower teeth), and the lower lip are the other active articulators.


A speech organ is active if it moves as sound is produced, whereas it is passive if there is no movement. Together with the lips, tongue, and teeth, the speech organs also include the alveolar ridge, uvula, palate, and glottis. Of these speech articulators, only the lower lip, tongue, and glottis are active. The mechanism of sound or voice production starts as air that is taken in flows through the glottis, resulting in the vibration of the vocal cords. This vibration pushes the air to flow out through the glottis to produce vibration in the vocal tract, producing sound.

Articulatory phonetics deals with how the speech organs work together. For instance, different sounds can be produced by the interaction between the lips and the teeth. Vowels are produced when the shape of the mouth changes through the coordination between the upper and lower lips, although the position of the tongue is also important. Consonants are produced by the coordination among the tongue, teeth, and palate.

Speech organs are also prone to stress, called vocal loading, due to several factors. Continued use of voice, speaking loudly for a long time, and speaking with an unusual pitch of the voice may cause strain on the speech organs. Smoking and dehydration may cause dryness in the throat area, affecting the quality of the voice. Vocal loading may be prevented by minimizing the use of voice, speaking with normal voice volume and pitch, avoiding smoking, and drinking enough fluids.

The human vocal tract is well built for maximizing the number of different sounds it can produce. Particularly notable are the way the tongue can move freely in the mouth and the modifications that support this ability.

Speech and Vocal Tract

The vocal tract is divided into two parts. The first is called the oral tract, which is highly mobile and consists of the tongue, pharynx, palate, lips, jaw, and so on (shown in Fig. 1). The positions of these organs are varied to produce different speech sounds, which we hear as the radiation from the lips or nostrils. The second is the nasal tract, which is immobile but is coupled to the oral tract by changing the position of the velum. The shape of the vocal tract responds better to some of the frequencies produced by the vocal cords than to others; this is the essential mechanism for the production of different speech sounds. The lowest resonance frequency for a particular shape of the vocal tract is called the first formant (f1), the next the second formant frequency (f2), and so on.
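These resonances can be illustrated, very roughly, by modelling the vocal tract as a uniform tube that is closed at the glottis and open at the lips; under that assumption the resonances fall at odd multiples of c/4L. The short sketch below (Python) is only a simplified illustration of this textbook tube model, with assumed example values for tract length and speed of sound, not a method for measuring real formants.

    def tube_formants(tract_length_m=0.17, speed_of_sound=343.0, count=3):
        """Approximate resonance (formant) frequencies, in Hz, of a uniform
        tube closed at one end (glottis) and open at the other (lips).
        Resonances occur at odd multiples of c / (4 * L)."""
        return [(2 * n - 1) * speed_of_sound / (4.0 * tract_length_m)
                for n in range(1, count + 1)]

    # A 17 cm tract gives roughly 500, 1500 and 2500 Hz, close to the
    # formants usually quoted for a neutral (schwa-like) vowel.
    print(tube_formants())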


Language and Speech

The purpose of speaking is to convey meaningful ideas to the listener. In order to do

this, the listener should be able to interpret the meaning of the spoken sounds. One way

of doing this is by providing a coding mechanism with set of rules enabling the listener

to interpret the meaning of the speech. The human being uses linguistics as the tool for

coding the information. The coding mechanism is not straightforward. The new ideas

are converted into linguistic structure. This requires the selection of appropriate words and phrases. These words are ordered in sequence according to grammatical rules.

Sounds and speech

From the linguistic point of view, the smallest speech unit is known as the phoneme, which signals a difference in meaning and is normally written between slashes, for example /m/ in hum. In fact, the sounds produced for an individual phoneme vary depending on where it appears in a word, and phoneme sets are different for different languages; for example, about 40 phonemes are sufficient to discriminate between all the sounds made in British English.

Phonemes are categorized into six different groups: vowels, diphthongs, semivowels, stop consonants, fricatives, and affricates. The grouping of these phonemes is based on the way the sounds are produced. Each phoneme is characterized by its first three dominant formant frequencies, which originate from the vibration of the vocal cords. However, the formant frequencies vary considerably from speaker to speaker.


1.2

Speech Circuitry

The auditory region of the brain associated with speech comprehension (Wernicke's area and adjacent regions) is connected to the motor area controlling vocalization (Broca's area and neighboring regions). In the accompanying figure, these regions are shown under red shading, the connection is sketched under black shading, and the numbers roughly identify the Brodmann regions.

The larynx ("voice box") containing the vocal folds and the glottis

The larynx, more commonly known as the voice box or the Adam's apple, is crucial in the production and differentiation of speech sounds. The larynx is located at exactly the point where the throat divides between the trachea (the windpipe), which leads to the lungs, and the esophagus (the tube that carries food or drink to the stomach).


Over the larynx is a flap called the epiglottis that closes off the trachea when we swallow. This prevents the passage of food into the lungs. When the epiglottis is folded back out of the way, the parts of the larynx that are involved in speech production can be seen.

1.3

The Vocal Folds

There are two thin sheets of tissue that stretch in a V-shaped fashion from the front to the back of the larynx. These are called the vocal folds. (You'll often hear vocal "cords," a term which doesn't accurately convey the way the muscle works.) The space between the vocal folds is known as the glottis. The vocal folds can be positioned in different ways to create speech sounds.

Air passes through the vocal folds. If the vocal folds are open and air passes unobstructed, the vocal folds do not vibrate. Sounds produced this way are called voiceless. But if the vocal folds are held together and tense so that air does not pass unobstructed, the sounds produced this way are called voiced.

Speech is produced by causing a column of enclosed air to vibrate. It is the same process, basically, as the production of sound by a wind instrument in music. Air is forced under pressure from the lungs through the windpipe (trachea) to the voice box (larynx), a structure that sits on top of the windpipe and contains the vocal cords, as they are called. (These are not cords at all, really, and would be more properly called bands or membranes.) The vocal cords have the capability of closing off the opening (glottis) entirely and can hold back considerable air pressure (as when a person coughs or strains to lift a heavy weight). They can also assume other positions. They may be wide open, allowing the air to pass unimpeded. Or they may be closed almost but not quite


completely, so that the escaping air, forced through the narrow opening between them, causes them to vibrate like the reed in a musical instrument. This vibration produces the all-important vocal tone, known technically as voice, without which speech would be impossible. Speech sounds that have this tone as part of their makeup are called voiced, and those without it are called unvoiced or voiceless. Varying the amount of tension on the vocal cords causes the vocal tone to vary in quality and in number of cycles per second; in other words, the timbre and pitch of the tone can be changed voluntarily, within limits, by the speaker.

The air stream issuing from the larynx, with or without voice, can now be modified in many ways; that is, we are at the stage of articulation. Almost all the parts of the throat and lower head that are accessible to the air stream can take part in articulation. For discussion purposes, we can divide these parts into three groups: resonating cavities, articulators, and points of articulation.

RESONATING CAVITIES

The size, the shape, and the material composition of the vessel enclosing a vibrating air column all have important effects on the quality of the sound that comes from it. There are quite a few spaces in the speech tract that affect sounds by their resonating qualities; in acoustic terms, they reinforce (amplify) certain frequencies and suppress or weaken (dampen) others. In addition to the sinuses and other spaces in the head, which function passively and without the control of the speaker, the resonating cavities involved in speech production are these: the pharynx, the space formed by the root of the tongue and the walls of the throat, which affects the sound by its shape but is not actively used in English; the nose, which adds its quite distinctive quality to the sounds if the air is allowed to pass through it, whether or not the mouth is involved at the same time; and finally, the mouth, the most important of all because it contains a number of highly mobile organs and can assume a tremendous number of different shapes.

ARTICULATORS

These are mobile organs that can be brought close to, or into contact with, various locations in the speech tract (known as points of articulation) so as to stop or impede the free passage of the air stream. The manner of articulation is determined by the kind of closure or near closure that is made, as well as its manner of release. The articulators are the lips, especially the lower one; the tongue, usually divided into four parts: tip, front, middle, and back; the uvula; and, to an extent, the jaw, though its role is minor (it is possible to speak quite clearly with the jaws clenched, as ventriloquists do).


POINTS OF ARTICULATION

These are fixed locations against which the mobile articulators operate in order to produce speech sounds: the teeth, the gums, the alveolar ridge, the various parts of the palate (sometimes called the "hard" palate to distinguish it from the "soft" palate or velum), the velum, the walls of the pharynx and the glottis.

Elements of Speech

1. Articulation of words: Listen closely to the patient's speech. Is he speaking the words clearly? Observe whether he has a nasal tone, and also see how clear and distinct the words are. Can you clearly make out words from his speech? All of these come under articulation of speech.

A common disease associated with articulation of speech is Dysarthria. Dysarthria is nothing but defective articulation.

2. Loudness: Observe how loudly the patient speaks. A depressive patient may remain silent, or he may speak but be hardly audible. This is characteristic of many psychotic disorders as well.


1.4

Descriptions and functions of some important organs of speech

Important organs:

1. Lips
2. Teeth
3. Alveolar ridge
4. Tongue
5. Larynx
6. Vocal cords
7. Epiglottis
8. Pharynx
9. Soft palate
10. Uvula
11. Hard palate

The vocal cords

The larynx contains two small bands of elastic tissue. They are called the vocal cords. The opening between the vocal cords is called the glottis. When we breathe in or out, the glottis is open. This is the position for the production of voiceless sounds; e.g. /f/, /s/, /h/, etc. are voiceless sounds in English. The sounds produced when the vocal cords are brought together are called voiced sounds. So the main function of the vocal cords is to produce voiced and voiceless sounds.


The Soft Palate

The soft palate is also called the velum. It forms the back part of the roof of the mouth and separates the oral and nasal cavities. The last part of the soft palate is called the uvula. When the soft palate is lowered, the nasal sounds (/m, n, ŋ/) are produced. When it is raised, the air passes out through the oral cavity and the oral sounds (/p, t, k, s/, etc.) are produced.

The tongue

The tongue is an important organ of speech. It has the greatest variety of

movement. It is divided into four parts: the tip, the blade, the front and the back. A number of vowels are produced with the help of the tongue. Vowels differ from each other

because of the position of the tongue.

The tip of the tongue helps to produce /t, d, z/, etc. The blade of the tongue helps to produce /tʃ, dʒ, ʃ/, etc. The front of the tongue helps to produce the palatal sound /j/, and the back of the tongue helps to produce the /k, g/ sounds.

The lips

The upper lip and lower lip help to produce the bilabial sounds /p, b, m/. If they are held together, the sounds produced in that position are bilabial stops: /p, b/. If the lips are held in different shapes (rounded or spread), they help produce different vowels.


The teeth

The teeth take part in the production of consonant sounds. Only the upper teeth take part in the production of speech sounds; the lower teeth do not take part in the production of sounds. The sounds produced with the help of the upper teeth are called dental sounds, e.g. /θ/ and /ð/.

The alveolar ridge

The alveolar ridge is the part between the upper teeth and the hard palate. The sounds produced with the tongue touching the alveolar ridge are called alveolar sounds, e.g. /s/, /t/, /d/, etc.

Producing different speech sounds depends on the movement of speech organs. It is

essential to know the movement and the placement of each organ to produce particular

sounds. The above descriptions and functions of the organs of speech help you guide students to produce the consonants and vowels in the right way.

Uvula

The uvula is used to make guttural sounds. Together with the rest of the velum, it also helps to separate nasal consonants from oral ones by controlling whether air can pass through the nose.

The glottis

The glottis is used in controlling the vibration made by the vocal cords, in order to make different sounds.


1.5

Competencies:

1. Fill in the chart below. Write the corresponding answer in the blank.

   Description                                           Organ
   It is used to make guttural sounds.                   ______________
   _____________________________________                 The lips
   Takes part in the production of consonant sounds.     ______________
   It is also called the velum.                          ______________
   _____________________________________                 The vocal cords

2. Illustrate how the speech circuit flows.

Chapter Two/Phonetics and Phonology

Overview:

Words are composed of letters that have different and unique sounds, and these sounds give each word its essence. Words have their own different functions in our lives. Without words, communication would not exist, because when we communicate we use words to express what we have in mind, what we feel inside, and all our ideas and thoughts.

These words are composed of different letters, which at the same time have different sounds. These sounds make the word understood and remembered as it is. Sounds give meaning and identity to every word. A word changes its meaning when its sound is not delivered right. These different deliveries of sounds are called pronunciations. Words are delivered differently depending on the country you come from: a French speaker reads differently from an English speaker. So it is very important to learn how to pronounce a word, because the meaning of the word depends on how you pronounce it. Many people get confused about the proper pronunciation of words because many street languages are popping up nowadays, like gay lingo, Jejemon, Harharmon, and so on. We must know how to properly pronounce each word by identifying each sound that composes it, so that all the sounds are incorporated into one and produce the exact pronunciation of the word. In this chapter, one will be able to identify the different sounds and sound patterns.


2.1

Phonology

Phonology is a branch of linguistics concerned with the systematic organization of sounds in languages. It has traditionally focused largely on the study of the systems of phonemes in particular languages, but it may also cover any linguistic analysis either at a level beneath the word (including syllable onset and rhyme, articulatory gestures, articulatory features, mora, etc.) or at all levels of language where sound is considered to be structured for conveying linguistic meaning. It is about patterns of sounds: especially the different patterns of sounds in different languages, or, within each language, the different patterns of sounds in different positions in words. Phonology is concerned with the abstract, grammatical characterization of systems of sounds or signs. Phonetics, on the other hand, concerns itself with the production, transmission, and perception of the physical phenomena which are abstracted in the mind to constitute these speech sounds or signs (phones): their physiological production, acoustic properties, auditory perception, and neurophysiological status.

Phonology cares about the entire sound system for a given language. The goal is to formulate a model/theory which explains not only the sound patterns found in a particular language, but the patterns found in all languages. Examples of questions which are interesting to phonologists are: How do sounds change due to the sounds around them? (For example, why does the plural of cat end with an 's'-sound, the plural of dog end with a 'z'-sound, and the plural of dish end in something sounding like 'iz'?) How do sounds combine in a particular language? (For example, English allows 't' and 'b' to be followed by 'l' – rattle, rabble, atlas, ablative – so why then does 'blick' sound like a possible word in English when 'tlick' does not?).
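As a small illustration of the kind of pattern phonologists describe, the regular English plural rule just mentioned can be sketched in code: the ending sounds like "iz" after sibilants, "s" after other voiceless sounds, and "z" elsewhere. The sketch below (Python) works on toy labels for the final sound of the stem and is only meant to mirror the cat/dog/dish example above, not to be a full phonological analysis.

    def plural_allomorph(final_sound):
        """Choose the regular English plural ending from the final sound
        of the stem, written here as a simple phonemic label."""
        sibilants = {"s", "z", "sh", "zh", "ch", "j"}   # e.g. dish, church
        voiceless = {"p", "t", "k", "f", "th"}          # e.g. cat, book
        if final_sound in sibilants:
            return "iz"   # dish  -> dish-iz
        if final_sound in voiceless:
            return "s"    # cat   -> cat-s
        return "z"        # dog   -> dog-z (voiced sounds and vowels)

    for word, final in [("cat", "t"), ("dog", "g"), ("dish", "sh")]:
        print(word, "+ plural ->", plural_allomorph(final))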


Phonetics

Phonetics (pronounced /fəˈnɛtɪks/, from the Greek φωνή, phōnē, 'sound, voice') is a branch of linguistics that comprises the study of the sounds of human speech, or, in the case of sign languages, the equivalent aspects of sign.[1] It is concerned with the physical properties of speech sounds or signs (phones): their physiological production, acoustic properties, auditory perception, and neurophysiological status. Phonology, on the other hand, is concerned with the abstract, grammatical characterization of systems of sounds or signs.

The field of phonetics is a multiple layered subject of linguistics that focuses on speech. In the case of oral languages there are three basic areas of study:

Articulatory phonetics: the study of the production of speech sounds by the articulatory organs and vocal tract of the speaker

Acoustic phonetics: the study of the physical transmission of speech sounds from the speaker to the listener

Auditory phonetics: the study of the reception and perception of speech sounds by the listener

In order to produce sound humans use various body parts including the lips, tongue, teeth, pharynx and lungs. Phonetics is the term for the description and classification of speech sounds, particularly how sounds are produced, transmitted and received. A phoneme is the smallest unit in the sound system of a language; for example, the t sounds in the word top.

Various phonetic alphabets have been developed to represent the speech

sounds in writing through the use of symbols. Some of these symbols are

identical to the Roman letters used in many language alphabets; for example: p

and b. Other symbols are based on the Greek alphabet, such as θ to represent

the th- sound in thin and thought. Still others have been specially invented; e.g. ð

for the th- sound in the and then. The most widely used phonetic script is the

International Phonetic Alphabet.


2.2

Difference between phonetics and phonology

Phonetics simply describes the articulatory and acoustic properties of phones (speech sounds). Phonology studies how sounds interact as a system in a particular language. Stated another way, phonetics studies which sounds are present in a language; phonology studies how these sounds combine and how they change in combination, as well as which sounds can contrast to produce differences in meaning (phonology describes the phones as allophones of phonemes).

Redundant and contrastive features

Every language consists of speech sounds called phones. (Give example of various English sounds.) These different sounds do not all have the same status in the system of English phonology. The occurrence of certain phonetic features is entirely predictable, as is the case in English with voicing in sonorants, nasality of vowels, or length in vowels. Features whose presence is entirely predictable based on the phonetic environment are called redundant phonetic features: give example of the three English p's. Contrasts involving a redundant feature cannot be used to signal a change in meaning. Adding or removing a redundant, predictable feature results merely in a mispronunciation, not in a new meaning: cf. interchanging the different p's of English. In a broad phonetic transcription, redundant features can be ignored, and only the elements of pronunciation that are important for distinguishing meaning listed.

The occurrence of other sounds and features in a particular language is not predictable based on phonetic context. In English, for example, voicing in obstruents is never predictable: pat/bat, tip/dip, girl/curl; the contrast between fricatives and stops is also not predictable: send/tend; nor is the difference between central and lateral liquids: red/led, ball/bar. Such features are called distinctive, or contrastive, phonetic features. Phonetic features whose presence or absence can alter meaning are called phonemic features. The presence of a phonemic feature is not predictable according to phonetic context. Adding or subtracting a phonemic feature normally results in a change of meaning as well as in a change in pronunciation.


Complementary and contrastive distribution

Phonetic features that are redundant in one language can be phonemic in another. The two phones [r] and [l] are present in English and Korean but play an entirely different phonological role in each language. They are phonemically different in English, always signaling a difference in meaning: list/wrist, war/wall. In Korean, [r] appears word-initially and [l] syllable-finally: rupi 'ruby'; mul 'water'. The two sounds never contrast to produce a difference in meaning. In English the two sounds are in contrastive distribution; in Korean they are in complementary distribution. Similarly, features that are redundant in English may be phonemic in another language, such as aspiration in English and Mandarin Chinese: kha~n (to see) vs. ka~n (trunk, stem); tha# (pagoda) vs. ta# (beat, strike); phi@ng (a sound) vs. pi@ng (soldiers, army).

Phonemic analysis

How does a linguist determine which phonetic features in a given language are phonemic and which are not?

First, a phonetic inventory of speech sounds must be carried out. One needs to determine which speech sounds are present in a particular language in the first place. A linguist studying English for the first time, for instance, will soon discover that English has aspirated p, unaspirated p and non-released p, but no glottalized p, implosive p, or pharyngealized p. The first stage of phonological analysis simply involves an exhaustive phonetic analysis.

Second, having determined which speech sounds occur in a particular language, the linguist must determine whether or not the phonetic difference between these sounds is redundant or phonemic. Phonology is concerned with determining which speech sounds contrast with one another to produce differences in meaning and which speech sounds are in complementary distribution and never produce meaningful contrasts. To find which sounds are redundant and which are phonemic, linguists usually try to find a pair of words with different meanings that differ formally by only a single sound.


A pair of words which are distinguished by a difference in only one sound is called a minimal pair: pit/sit, cat/caught, law/raw. In the case of a minimal pair, the two contrasting sounds are obviously capable of distinguishing meaning, so the difference between them is phonemic. The linguist will find that some speech sounds in a given language help form many minimal pairs, others only a few. The number of minimal pairs involving a particular sound is called the functional yield of that sound. There are 429 minimal pairs involving the English sound [d] and only 32 involving [ð]. Some sounds contrast directly with one another in only a few minimal pairs: ether/either, thigh/thy, dilution/delusion, Confucian/confusion.
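The idea of a minimal pair is mechanical enough to search for automatically: two transcriptions of equal length that differ in exactly one segment. The sketch below (Python) shows one way to count such pairs over a tiny made-up phonemic lexicon with one symbol per segment; the word list and transcriptions are illustrative only, not taken from any real dictionary.

    from itertools import combinations

    # Toy lexicon: word -> phonemic transcription, one character per segment.
    lexicon = {
        "pit": "pIt", "sit": "sIt", "bit": "bIt",
        "law": "lO",  "raw": "rO",
        "cat": "k@t", "bat": "b@t",
    }

    def is_minimal_pair(t1, t2):
        """True if two transcriptions have equal length and differ in
        exactly one segment."""
        if len(t1) != len(t2):
            return False
        return sum(a != b for a, b in zip(t1, t2)) == 1

    pairs = [(w1, w2)
             for (w1, t1), (w2, t2) in combinations(lexicon.items(), 2)
             if is_minimal_pair(t1, t2)]
    print(pairs)   # e.g. ('pit', 'sit'), ('law', 'raw'), ('cat', 'bat'), ...

Counting how many pairs a given sound takes part in would give the functional yield described above.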

A third phase of phonological analysis is to try to remove from consideration all redundant phonetic features and focus only on the distinctive, phonemic features. The most widely employed means of accomplishing this is to group together all the sounds that actually occur in a language into contrastive sets called phonemes. For example, sounds which are in complementary distribution, such as the English sounds p, ph, non-released p, are treated as a single phonological unit, or phoneme, their redundant phonetic differences ignored. The actual phones that act as positional variants of one and the same phoneme are called allophones of that phoneme; thus, the three English p's are allophones of a single phoneme. The phoneme is an abstract unit. We don't hear or pronounce the phonemes of a language; we hear and pronounce their allophones.

The question then arises as to how to symbolize such an abstraction as the phoneme that is manifested as the three English p's. Since there are two ways of looking at the sound system of language: one phonetic and the other phonemic, there are also two types of transcriptions. The one we have been using up till now is a narrow transcription intended to portray as much phonetic information as possible. This type is called phonetic transcription and is enclosed in square brackets. The same IPA symbols can be used in phonetic transcription to transcribe any language of the world. Thus the phonetic symbol [l] represents the same sound in English and Korean.

The other type of transcription is called phonemic transcription and is enclosed in slanted brackets. Phonemic transcription ignores any redundant phonetic detail in a language and only portrays the distinctive and meaningful phonetic differences. Phonemic transcription represents phonemes, not the sounds as they are actually spoken.


Choosing phonemic symbols

What symbol is chosen from among the phonetic symbols for the individual allophones is up to the linguist performing the phonological analysis and depends on several factors. Some phonemes have the same or nearly the same pronunciation in every phonetic environment, as is true of English [s], [m]. In such cases, the symbol for the phone can also be used as the symbol for the phoneme. If there is only one surface manifestation of a phoneme, then the phonetic symbol for that sound becomes the phonemic symbol, as well. If a phoneme is composed of several phonetically distinct allophones, however, then any of the following may be done:

a.) diacritics are removed from allophone symbols, simplifying the sound.

b.) the phonetic symbol for one of the allophones may be co-opted to stand for all the allophones (carrot instead of schwa, or o instead of O)

c.) the most common letter of the alphabet is chosen (t/T)

d.) some compromise letter is chosen, perhaps not even a symbol from the phonetic alphabet (capital R for l/r in Korean).

Obviously, the process of choosing a phonemic symbol is somewhat arbitrary and up to whatever linguist is performing the analysis. Phonemic symbols are a type of phonetic shorthand with a specific value for one and only one language; they are not universal like the symbols of the phonetic alphabet.

Thus, the value of symbols used in phonemic transcription is idiosyncratic and differs from language to language. Phonemic transcription depends upon the interrelationship of sounds in each particular language, whereas phonetic transcription depends simply on the pronunciation of each individual sound regardless of its function in the sound system of the given language. A phonetic symbol stands for one and the same sound regardless of language, but a phonemic symbol often stands for any one of several actual sounds. For example, the phonetic symbol [l] stands for the same sound in the phonetic transcription of English and Korean.


A phonemic symbol such as /l/ usually stands for quite different collections of sounds in different languages. English, for example, has a phoneme that could be represented as /l/ with two separate surface manifestations: velarized and non-velarized l. In phonetic transcription these two l-sounds would each be written with their own separate symbol. In phonemic transcription they would both be written with the same symbol /l/. In Korean the phonemic symbol /l/ would represent the allophones [r] and [l]. Remember that phonetic transcription, enclosed in square brackets, attempts to express as much phonetic detail as possible, redundant or otherwise; phonemic transcription does not mark redundant features, but rather is intended to represent only those phonetic details of a given language that are distinctive. Phonemic transcription, therefore, uses phonetic symbols in a way unique to each particular language. The phonemic symbols chosen in your handout are essentially the same sounds used in phonetic transcription minus the diacritical marks.

Phonology as grammar of phonetic patterns

The consonant cluster /st/ is OK at the beginning, middle or end of words in English.

At the beginnings of words, /str/ is OK in English, but /ftr/ or /ʃtr/ are not (they are ungrammatical).

/ʃtr/ is OK in the middle of words, however, e.g. in "ashtray".

/ʃtr/ is OK at the beginnings of words in German, though, and /ftr/ is OK word-initially in Russian, but not in English or German.
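A phonotactic grammar of this kind can be expressed as a simple membership test over permitted word-initial clusters. The sketch below (Python) hard-codes a few onset clusters per language purely to mirror the /st/, /str/, /ʃtr/ and /ftr/ statements above; the cluster sets are illustrative, not complete grammars of these languages.

    # Tiny, illustrative sets of permitted word-initial clusters,
    # taken only from the statements in the text above.
    INITIAL_CLUSTERS = {
        "English": {"st", "str"},   # /st/ and /str/ are fine word-initially
        "German":  {"ʃtr"},         # /ʃtr/ is fine word-initially
        "Russian": {"ftr"},         # /ftr/ is fine word-initially
    }

    def allowed_initially(cluster, language):
        """Return True if the cluster is listed as a permitted word onset."""
        return cluster in INITIAL_CLUSTERS.get(language, set())

    for cluster in ("str", "ʃtr", "ftr"):
        for lang in INITIAL_CLUSTERS:
            print(f"/{cluster}/ word-initially in {lang}:",
                  allowed_initially(cluster, lang))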

3. A given sound has a different function or status in the sound patterns of different languages

For example, the glottal stop [ʔ] occurs in both English and Arabic BUT...

In English, at the beginning of a word, [ʔ] is just a way of beginning vowels, and does not occur with consonants. In the middle or at the end of a word, [ʔ] is one possible pronunciation of /t/, e.g. "pat" [paʔ].

In Arabic, /ʔ/ is a consonant sound like any other (/k/, /t/ or whatever): [ʔíktib] "write!", [daʔíiʔa] "minute (time)", [ħaʔʔ] "right".


4. Phonemes and allophones, or sounds and their variants

The vowels in the English words "cool", "whose" and "moon" are all similar but slightly different. They are three variants or allophones of the /u/ phoneme. The different variants are dependent on the different contexts in which they occur. Likewise, the consonant phoneme /k/ has different variant pronunciations in different contexts. Compare:

keep  /kip/   [k+h]   The place of articulation is fronter in the mouth.
cart  /kɑt/   [kh]    The place of articulation is not so front in the mouth.
coot  /kut/   [khw]   The place of articulation is backer, and the lips are rounded.
seek  /sik/   [k`]    There is less aspiration than in initial position.
scoop /skup/  [k]     There is no aspiration after /s/.
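Because these variants are predictable from context, they can be generated by rule rather than listed. The sketch below (Python) encodes just the aspiration part of the /k/ pattern described above, using the same plain labels for the allophones; it is a toy rule, not a full account of English allophony.

    def k_allophone(position, after_s=False):
        """Pick an allophone label for the phoneme /k/ from its context.

        position: "initial" or "final" within the word.
        after_s:  True if /k/ directly follows /s/, as in "scoop".
        """
        if after_s:
            return "[k]"      # unaspirated after /s/
        if position == "initial":
            return "[kh]"     # aspirated at the start of a word
        return "[k`]"         # weakly aspirated at the end of a word

    print("keep  ->", k_allophone("initial"))
    print("scoop ->", k_allophone("initial", after_s=True))
    print("seek  ->", k_allophone("final"))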

These are all examples of variants according to position (contextual variants). There are also variants between speakers and dialects. For example, "toad" may be pronounced [təʊd] in high-register RP, and [toʊd] or [toːd] in the North. All of them are different pronunciations of the same sequence of phonemes. But these differences can lead to confusion: [toʊd] is "toad" in one dialect, but may be "told" in another.

5. Phonological systems

Phonology is not just (or even mainly) concerned with categories or objects (such as consonants, vowels, phonemes, allophones, etc.) but is also crucially about relations. For example, the English stops and fricatives can be grouped into related pairs which differ in voicing and (for the stops) aspiration:

Voiceless/aspirated:   ph   th   kh      f   s   θ   ʃ      h
Voiced/unaspirated:    b    d    g       v   z   ð   ʒ      (unpaired)


Patterns lead to expectations: we expect the voiceless fricative [h] to be paired with a voiced [ɦ], but we do not find this sound as a distinctive phoneme in English. And in fact /h/ functions differently from the other voiceless fricatives (it has a different distribution in words etc.). So even though [h] is phonetically classed as a voiceless fricative, it is phonologically quite different from /f/, /s/, /θ/ and /ʃ/.

Different patterns are found in other languages. In Classical Greek a three-way distinction was made between stops:

Voiceless aspirated:        ph   th   kh
Voiceless unaspirated:      p    t    k
Voiced (and unaspirated):   b    d    g

In Hindi-Urdu a four-way pattern is found, at five places of articulation:

Voiceless aspirated:                    ph   th   ʈh   ch   kh
Voiceless unaspirated:                  p    t    ʈ    c    k
Voiced unaspirated:                     b    d    etc.
Breathy voiced ("voiced aspirates"):    bʱ   dʱ   etc.

How many degrees of vowel height are there in Bulgarian? On the face of things, it appears to be not very different from Tübatulabal, which has three heights: three high vowels, two mid vowels and one low vowel. But if we look more closely into Bulgarian phonology, we see that the fact that schwa is similar in height to /e/ and /o/ is coincidental: the distinction that matters in Bulgarian is /i/ vs. /e/, /u/ vs. /o/ and /ə/ vs. /a/, i.e. relatively high vs. relatively low. As evidence for this statement, note that while all six vowels may occur in stressed syllables, only /i/, /e/, /ə/ and /u/ occur in unstressed syllables.


2.3

Competencies:

1. Draw a diagram that shows how phonology is related to

phonetics.

2. Make an illustration of how the mouth makes an aspirated and a voiced sound.

Chapter Three/Sounds

Overview:

In every conversation we encounter in our daily lives there are different yet distinct sounds that give shape to a word or phrase. These phrases are packed together, forming sentences that we master as we grow older.

This chapter is entirely about sounds: how they are formed, used, and combined to become a word or phrase. Sound is a very important tool in our daily lives. If we think deeply about it, communication is what makes us human. It is the very tool through which people meet in terms of ideas, emotions, and feelings. With these realities in mind, we can appreciate the importance and significance of sound itself. What is the tiniest foundation of a word? What is there in every phrase or word? Questions like these make us realize that sound, as the root of all words and phrases, is among the most important things in the world. Being able to produce a sound means being able to communicate, and being able to communicate means being able to live. Living is all about communicating and reaching out; as the old adage goes, "no man is an island." People are born to interact, and interaction needs communication in order to take place successfully.


3.1

Sound

A drum produces sound via a vibrating membrane. Sound is a mechanical wave: an oscillation of pressure transmitted through some medium (such as air or water), composed of frequencies within the range of hearing.


Acoustics:


Acoustics is the interdisciplinary science that deals with the study of all mechanical waves in gases, liquids, and solids, including vibration, sound, ultrasound and infrasound. A scientist who works in the field of acoustics is an acoustician, while someone working in the field of acoustical engineering may be called an acoustical or audio engineer. The application of acoustics can be seen in almost all aspects of modern society; subdisciplines include aeroacoustics, audio signal processing, architectural acoustics, bioacoustics, electroacoustics, environmental noise, musical acoustics, noise control, psychoacoustics, speech, ultrasound, underwater acoustics and vibration.[2]

Propagation of sound

Sound is a sequence of waves of pressure that propagates through

compressible media such as air or water. (Sound can propagate through solids

as well, but there are additional modes of propagation). Sound that is perceptible

by humans has frequencies from about 20 Hz to 20,000 Hz. In air at standard

temperature and pressure, the corresponding wavelengths of sound waves range

from 17 m to 17 mm. During propagation, waves can be reflected, refracted, or

attenuated by the medium.[3]


The behavior of sound propagation is generally affected by three things:

A relationship between density and pressure. This relationship, affected by temperature, determines the speed of sound within the medium.

The propagation is also affected by the motion of the medium itself. For example, sound moving through wind. Independent of the motion of sound through the medium, if the medium is moving, the sound is further transported.

The viscosity of the medium also affects the motion of sound waves. It determines the rate at which sound is attenuated. For many media, such as air or water, attenuation due to viscosity is negligible.

When sound is moving through a medium that does not have constant physical properties, it may be refracted (either dispersed or focused).

Spherical compression waves

The mechanical vibrations that can be interpreted as sound are able to travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium. Sound cannot travel through a vacuum.


Longitudinal and transverse waves

Sound is transmitted through gases, plasma, and liquids as longitudinal waves, also called compression waves. Through solids, however, it can be transmitted as both longitudinal waves and transverse waves. Longitudinal sound waves are waves of alternating pressure deviations from the equilibrium pressure, causing local regions of compression and rarefaction, while transverse waves (in solids) are waves of alternating shear stress at right angle to the direction of propagation.

Matter in the medium is periodically displaced by a sound wave, and thus oscillates. The energy carried by the sound wave converts back and forth between the potential energy of the extra compression (in case of longitudinal waves) or lateral displacement strain (in case of transverse waves) of the matter and the kinetic energy of the oscillations of the medium.

3.2

Sound wave properties and characteristics

Sinusoidal waves of various frequencies; the bottom waves have higher frequencies than those above. The horizontal axis represents time.


Sound waves are often simplified to a description in terms of sinusoidal plane waves, which are characterized by these generic properties:

Frequency, or its inverse, the period
Wavelength
Wavenumber
Amplitude
Sound pressure
Sound intensity
Speed of sound
Direction

Speed of sound

The speed of sound depends on the medium the waves pass through, and is a fundamental property of the material. In general, the speed of sound is proportional to the square root of the ratio of the elastic modulus (stiffness) of the medium to its density. Those physical properties and the speed of sound change with ambient conditions. For example, the speed of sound in gases depends on temperature. In 20 °C (68 °F) air at sea level, the speed of sound is approximately 343 m/s (1,230 km/h; 767 mph) using the formula "v = (331 + 0.6 T) m/s". In fresh water, also at 20 °C, the speed of sound is approximately 1,482 m/s (5,335 km/h; 3,315 mph). In steel, the speed of sound is about 5,960 m/s (21,460 km/h; 13,330 mph).[6] The speed of sound is also slightly sensitive (a second-order anharmonic effect) to the sound amplitude, which means that there are nonlinear propagation effects, such as the production of harmonics and mixed tones not present in the original sound (see parametric array).
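The approximate formula quoted above for air, v = (331 + 0.6 T) m/s with T in degrees Celsius, is easy to evaluate directly. The sketch below (Python) does so for a few temperatures; it is only the simple linear approximation from the text, not a precise acoustic model.

    def speed_of_sound_air(temp_celsius):
        """Approximate speed of sound in air, in m/s, using the linear
        rule v = 331 + 0.6 * T (T in degrees Celsius)."""
        return 331.0 + 0.6 * temp_celsius

    for t in (0, 20, 35):
        print(f"{t} deg C: {speed_of_sound_air(t):.1f} m/s")
    # 20 deg C gives 343.0 m/s, matching the figure quoted in the text.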


Perception of sound

Human ear

The perception of sound in any organism is limited to a certain range of frequencies. For humans, hearing is normally limited to frequencies between about 20 Hz and 20,000 Hz (20 kHz),[7] although these limits are not definite. The upper limit generally decreases with age. Other species have a different range of hearing. For example, dogs can perceive vibrations higher than 20 kHz, but are deaf to anything below 40 Hz. As a signal perceived by one of the major senses, sound is used by many species for detecting danger, navigation, predation, and communication. Earth's atmosphere, water, and virtually any physical phenomenon, such as fire, rain, wind, surf, or earthquake, produces (and is characterized by) its unique sounds. Many species, such as frogs, birds, and marine and terrestrial mammals, have also developed special organs to produce sound. In some species, these produce song and speech. Furthermore, humans have developed culture and technology (such as music, telephone and radio) that allows them to generate, record, transmit, and broadcast sound. The scientific study of human sound perception is known as psychoacoustics.

Noise

Noise is a term often used to refer to an unwanted sound. In science and engineering, noise is an undesirable component that obscures a wanted signal.


Sound pressure level

Sound pressure is the difference, in a given medium, between average local pressure and the pressure in the sound wave. A square of this difference (i.e., a square of the deviation from the equilibrium pressure) is usually averaged over time and/or space, and a square root of this average provides a root mean square (RMS) value. For example, 1 Pa RMS sound pressure (94 dBSPL) in atmospheric air implies that the actual pressure in the sound wave oscillates

between (1 atm − √2 Pa) and (1 atm + √2 Pa), that is, between 101323.6 and 101326.4 Pa. As the human ear can detect sounds with a wide range of amplitudes, sound pressure is often measured as a level on a logarithmic decibel scale. The sound pressure level (SPL) or Lp is defined below, after the list of sound measurements.


Sound measurements

Sound pressure p, SPL

Particle velocity v, SVL

Particle displacement ξ

Sound intensity I, SIL

Sound power Pac

Sound power level SWL

Sound energy

Sound exposure E

Sound exposure level SEL

Sound energy density E

Sound energy flux q

Acoustic impedance Z


Lp = 20 log10(p / p0) dB,

where p is the root-mean-square sound pressure and p0 is a reference sound pressure. Commonly used reference sound pressures, defined in the standard ANSI S1.1-1994, are 20 µPa in air and 1 µPa in water. Without a specified reference sound pressure, a value expressed in decibels cannot represent a sound pressure level.
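The decibel definition above is straightforward to compute. The sketch below (Python) converts an RMS sound pressure to a sound pressure level using the 20 µPa reference for air; the example value of 1 Pa reproduces the roughly 94 dB SPL figure mentioned earlier.

    import math

    P_REF_AIR = 20e-6  # reference RMS pressure in air, 20 micropascals

    def sound_pressure_level(p_rms, p_ref=P_REF_AIR):
        """Sound pressure level in dB: Lp = 20 * log10(p_rms / p_ref)."""
        return 20.0 * math.log10(p_rms / p_ref)

    print(round(sound_pressure_level(1.0), 1))     # ~94.0 dB SPL
    print(round(sound_pressure_level(20e-6), 1))   # 0.0 dB SPL (the reference itself)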

Since the human ear does not have a flat spectral response, sound pressures are often frequency weighted so that the measured level matches perceived levels more closely. The International Electrotechnical Commission (IEC) has defined several weighting schemes. A-weighting attempts to match the response of the human ear to noise, and A-weighted sound pressure levels are labeled dBA. C-weighting is used to measure peak levels.

Equipment for dealing with sound

Equipment for outputting or generating sound: musical instruments, sound boxes, headphones, sonar systems, and sound reproduction and broadcasting equipment. Many of these use electro-acoustic transducers for input, such as microphones.

Sound measurement

Decibel, Sone, Mel, Phon, Hertz
Sound pressure level, Sound pressure
Particle velocity, Acoustic velocity
Particle displacement, Particle amplitude, Particle acceleration
Sound power, Acoustic power, Sound power level
Sound energy flux
Sound intensity, Acoustic intensity, Sound intensity level
Acoustic impedance, Sound impedance, Characteristic impedance
Speed of sound, Amplitude


Sound check – The Sound Symposer

Automotive manufacturers have been fiddling with the way their products sound, both on the outside and on the inside, for a few decades now. This practice might be a bit better known in regard to motorbikes, but it is not strange to cars either. In recent times, BMW has employed an actual digital sound file, sound-inspected and digitally perfected, to offer M5 owners what they consider to be the proper sense of aggressiveness when driving at pace. Porsche did something similar as well; however, they went with a more old-school option, employing a sound symposer in order to feed the growl of the 911's flat-six into the cabin. A sound symposer is an electrically controlled diaphragm placed in a specially tuned sound tube meant to funnel the proper engine noises into the passenger compartment. The electrical control of the diaphragm means that it is opened and closed by the vehicle's on-board electronics, so that you can basically turn the engine snarl on and off at the push of a button.


3.3

THE SOUND WAVES OF NATURE

Nature sounds can be separated into two main groups: the first includes the sounds

produced by animals, while the second consists of sounds produced by natural

phenomena such as weather and meteorological occurrences.

Throughout history, sounds of nature, especially animal sounds, have been objects of imitation by tribal peoples (and even of devotion when they have been related to their belief systems). Even today imitation of nature sounds is used in many shamanic rituals

and healing techniques.

Apart from that, sounds of nature have many positive effects on humans. Being in

nature, surrounded by pure acoustics of the environment, gives a feeling of

overwhelming calm that is hard to experience in urban surroundings.


Thunderstorm sounds

Natural phenomena have the ability to attract the full attention of all living beings. Acting in cycles, nature constantly repeats and shows its universal powers, causing the strongest fascination for humans since the beginning of mankind. All native and ancient peoples have mythologies firmly connected with many different occurrences in nature.

Probably the most mythologized natural phenomenon is thunder, or the thunderstorm. Ancient Romans related thunder to the sky god Jupiter, who hurled lightning bolts forged by Vulcan. Early Christians accepted the idea of Aristotle that fierce storms were the work of God. Native Americans associated thunderstorms with

the Thunderbird. According to their rich mythology, this bird is a servant of the

Great Spirit.


Sound of rain

In his masterpiece novel "One Hundred Years of Solitude," the famous Colombian writer Gabriel García Márquez wrote an unforgettable passage about a man who built a special aluminum roof over his new house only to listen to the sound of rain while making love with his wife.

Even without empirical facts about the effects that rain sounds produce on human behavior, great artists such as Márquez have the intuitive power to accept, experience, and transform the art of nature into great pieces of human-made art. Making love while the sound of rain is caressing you – that is something!

However, it is not only this connection to the sound of rain, of course. There are numerous soothing effects on the human mind initiated by the sound of rain.


3.4 Sound Properties and Their Perception

Pitch and Frequency

A sound wave, like any other wave, is introduced into a medium by a vibrating object. The vibrating object is the source of the disturbance that moves through the medium. The vibrating object that creates the disturbance could be the vocal cords of a person, the vibrating string and sound board of a guitar or violin, the vibrating tines of a tuning fork, or the vibrating diaphragm of a radio speaker. Regardless of what vibrating object is creating the sound wave, the particle of the medium through which the sound moves is vibrating in a back and forth motion at a given frequency. The frequency of a wave refers to how often the particles of the medium vibrate when a wave passes through the medium. The frequency of a wave is measured as the number of complete back-and-forth vibrations of a particle of the medium per unit of time. If a particle of air undergoes 1000 longitudinal vibrations in 2 seconds, then the frequency of the wave would be 500 vibrations per second. A commonly used unit for frequency is the Hertz (abbreviated Hz), where

1 Hertz = 1 vibration/second

As a sound wave moves through a medium, each particle of the medium vibrates at the same frequency. This is sensible since each particle vibrates due to the motion of its nearest neighbor. The first particle of the medium begins vibrating, at say 500 Hz, and begins to set the second particle into vibrational motion at the same frequency of 500 Hz. The second particle begins vibrating at 500 Hz and thus sets the third particle of the medium into vibrational motion at 500 Hz. The process continues throughout the medium; each particle vibrates at the same frequency. And of course the frequency at which each particle vibrates is the same as the frequency of the original source of the sound wave. Subsequently, a guitar string vibrating at 500 Hz will set the air particles in the room vibrating at the same frequency of 500 Hz, which carries a sound signal to the ear of a listener, which is detected as a 500 Hz sound wave.
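As a small illustration of the definition above, a minimal Python sketch reproduces the worked example of 1000 vibrations in 2 seconds:

def frequency_hz(vibration_count, seconds):
    """Frequency is the number of complete back-and-forth vibrations per unit of time."""
    return vibration_count / seconds

# 1000 longitudinal vibrations in 2 seconds give a frequency of 500 Hz.
print(frequency_hz(1000, 2))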


The back-and-forth vibrational motion of the particles of the medium would not be the only observable phenomenon occurring at a given frequency. Since a sound wave is a pressure wave, a detector could be used to detect oscillations in pressure from a high pressure to a low pressure and back to a high pressure. As the compressions (high pressure) and rarefactions (low pressure) move through the medium, they would reach the detector at a given frequency. For example, a compression would reach the detector 500 times per second if the frequency of the wave were 500 Hz. Similarly, a rarefaction would reach the detector 500 times per second if the frequency of the wave were 500 Hz. The frequency of a sound wave not only refers to the number of back-and-forth vibrations of the particles per unit of time, but also refers to the number of compressions or rarefactions that pass a given point per unit of time. A detector could be used to detect the frequency of these pressure oscillations over a given period of time. The typical output provided by such a detector is a pressure-time plot as shown below.

Since a pressure-time plot shows the fluctuations in pressure over time, the period of the sound wave can be found by measuring the time between successive high pressure points (corresponding to the compressions) or the time between successive low pressure points (corresponding to the rarefactions). As discussed in an earlier unit, the frequency is simply the reciprocal of the period. For this reason, a sound wave with a high frequency would correspond to a pressure time plot with a small period - that is, a plot corresponding to a small amount of time between successive high pressure points. Conversely, a sound wave with a low frequency would correspond to a pressure time plot with a large period - that is, a plot corresponding to a large amount of time between successive high pressure points. The diagram below shows two pressure-time plots, one corresponding to a high frequency and the other to a low frequency.
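Because frequency is simply the reciprocal of the period, the relationship is easy to check numerically; a minimal sketch, with example values chosen only for illustration:

def period_seconds(frequency_hz):
    """The period of a wave is the reciprocal of its frequency."""
    return 1.0 / frequency_hz

# A high-frequency (500 Hz) wave has a small period; a low-frequency (50 Hz) wave has a large one.
print(period_seconds(500))  # 0.002 s between successive high-pressure points
print(period_seconds(50))   # 0.02 s between successive high-pressure points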


The ears of a human (and of other animals) are sensitive detectors capable of detecting the fluctuations in air pressure that impinge upon the eardrum. The mechanics of the ear's detection ability will be discussed later in this lesson. For now, it is sufficient to say that the human ear is capable of detecting sound waves with a wide range of frequencies, ranging from approximately 20 Hz to 20 000 Hz. Any sound with a frequency below the audible range of hearing (i.e., less than 20 Hz) is known as an infrasound, and any sound with a frequency above the audible range of hearing (i.e., more than 20 000 Hz) is known as an ultrasound. Humans are not alone in their ability to detect a wide range of frequencies. Dogs can detect frequencies as low as approximately 50 Hz and as high as 45 000 Hz. Cats can detect frequencies as low as approximately 45 Hz and as high as 85 000 Hz. Bats, being nocturnal creatures, must rely on sound echolocation for navigation and hunting; they can detect frequencies as high as 120 000 Hz. Dolphins can detect frequencies as high as 200 000 Hz. While dogs, cats, bats, and dolphins have an unusual ability to detect ultrasound, the elephant possesses the unusual ability to detect infrasound, having an audible range from approximately 5 Hz to approximately 10 000 Hz. The sensation of a frequency is commonly referred to as the pitch of a sound. A high-pitched sound corresponds to a high-frequency sound wave, and a low-pitched sound corresponds to a low-frequency sound wave. Amazingly, many people, especially those who have been musically trained, are capable of detecting a difference in frequency between two separate sounds that is as little as 2 Hz. When two sounds with a frequency difference of greater than 7 Hz are played simultaneously, most people are capable of detecting the presence of a complex wave pattern resulting from the interference and superposition of the two sound waves. Certain sound waves, when played (and heard) simultaneously, produce a particularly pleasant sensation; such sounds are said to be consonant. Such sound waves form the basis of intervals in music.
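The audible-range boundaries mentioned above can be summarized in a small classification sketch; the cutoff values simply restate the approximate 20 Hz and 20 000 Hz limits for human hearing:

def classify_for_humans(frequency_hz):
    """Classify a frequency relative to the approximate human audible range."""
    if frequency_hz < 20:
        return "infrasound"
    if frequency_hz > 20000:
        return "ultrasound"
    return "audible"

print(classify_for_humans(5))      # infrasound - within an elephant's hearing range
print(classify_for_humans(440))    # audible
print(classify_for_humans(50000))  # ultrasound - within a bat's hearing range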


For example, any two sounds whose frequencies make a 2:1 ratio are said to be separated by an octave and result in a particularly pleasing sensation when heard. That is, two sound waves sound good when played together if one sound has twice the frequency of the other. Similarly two sounds with a frequency ratio of 5:4 are said to be separated by an interval of a third; such sound waves also sound good when played together. Examples of other sound wave intervals and their respective frequency ratios are listed in the table below.

Interval | Frequency Ratio | Examples

Octave | 2:1 | 512 Hz and 256 Hz

Third | 5:4 | 320 Hz and 256 Hz

Fourth | 4:3 | 342 Hz and 256 Hz

Fifth | 3:2 | 384 Hz and 256 Hz
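The table's examples all follow from multiplying a base frequency by the interval's ratio; a short sketch of that calculation (note that 256 Hz times 4:3 is 341.3 Hz, which the table rounds to 342 Hz):

# Musical intervals expressed as frequency ratios, as in the table above.
INTERVALS = {"octave": 2 / 1, "third": 5 / 4, "fourth": 4 / 3, "fifth": 3 / 2}

def interval_frequency(base_hz, interval):
    """Frequency of the note separated from base_hz by the named interval."""
    return base_hz * INTERVALS[interval]

for name in INTERVALS:
    print(name, round(interval_frequency(256, name), 1), "Hz")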

The ability of humans to perceive pitch is associated with the frequency of the sound wave that impinges upon the ear. Because sound waves traveling through air are longitudinal waves that produce high- and low-pressure disturbances of the particles of the air at a given frequency, the ear has an ability to detect such frequencies and associate them with the pitch of the sound. But pitch is not the only property of a sound wave detectable by the human ear. In the next part of Lesson 2, we will investigate the ability of the ear to perceive the intensity of a sound wave.


Behavior of Sound Waves

Reflection, Refraction, and Diffraction

Like any wave, a sound wave doesn't just stop when it reaches the end of the medium or when it encounters an obstacle in its path. Rather, a sound wave will undergo certain behaviors when it encounters the end of the medium or an obstacle. Possible behaviors include reflection off the obstacle, diffraction around the obstacle, and transmission (accompanied by refraction) into the obstacle or new medium. In this part of Lesson 3, we will investigate behaviors that have already been discussed in a previous unit and apply them towards the reflection, diffraction, and refraction of sound waves.

When a wave reaches the boundary between one medium and another, a portion of the wave undergoes reflection and a portion of the wave undergoes transmission across the boundary. As discussed in the previous part of Lesson 3, the amount of reflection is dependent upon the dissimilarity of the two media. For this reason, acoustically minded builders of auditoriums and concert halls avoid the use of hard, smooth materials in the construction of the inside of their halls. A hard material such as concrete is as dissimilar as can be to the air through which the sound moves; subsequently, most of the sound wave is reflected by the walls and little is absorbed. Walls and ceilings of concert halls are made of softer materials such as fiberglass and acoustic tiles. These materials are more similar to air than concrete and thus have a greater ability to absorb sound. This gives the room more pleasing acoustic properties.

Reflection of sound waves off of surfaces can lead to one of two phenomena - an echo or a reverberation. A reverberation often occurs in a small room with height, width, and length dimensions of approximately 17 meters or less. Why the magical 17 meters? The effect of a particular sound wave upon the brain endures for more than a tiny fraction of a second; the human brain keeps a sound in memory for up to 0.1 seconds. If a reflected sound wave reaches the ear within 0.1 seconds of the initial sound, then it seems to the person that the sound is prolonged.


The reception of multiple reflections off of walls and ceilings within 0.1 seconds of each other causes reverberations - the prolonging of a sound. Since sound waves travel at about 340 m/s at room temperature, it will take approximately 0.1 s for a sound to travel the length of a 17 meter room and back, thus causing a reverberation (recall from Lesson 2, t = d/v = (34 m)/(340 m/s) = 0.1 s). This is why reverberations are common in rooms with dimensions of approximately 17 meters or less. Perhaps you have observed reverberations when talking in an empty room, when honking the horn while driving through a highway tunnel or underpass, or when singing in the shower. In auditoriums and concert halls, reverberations occasionally occur and lead to the displeasing garbling of a sound.
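The 0.1-second rule and the 17-meter figure can be tied together in a short sketch; the speed of sound and the persistence time are the values assumed in the passage above:

SPEED_OF_SOUND = 340.0   # m/s at room temperature, as assumed in the text
PERSISTENCE = 0.1        # seconds the brain "keeps" a sound in memory

def reflection_delay(distance_m):
    """Round-trip travel time for sound reflecting off a surface distance_m away."""
    return 2 * distance_m / SPEED_OF_SOUND

def perceived_as(distance_m):
    return "echo" if reflection_delay(distance_m) > PERSISTENCE else "reverberation"

print(perceived_as(17))  # reverberation: the 0.1 s round trip is right at the threshold
print(perceived_as(30))  # echo: the round trip takes roughly 0.18 s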

But reflection of sound waves in auditoriums and concert halls does not always lead to displeasing results, especially if the reflections are designed right. Smooth walls have a tendency to direct sound waves in a specific direction. Subsequently, the use of smooth walls in an auditorium will cause spectators to receive a large amount of sound from one location along the wall; there would be only one possible path by which sound waves could travel from the speakers to the listener. The auditorium would not seem to be as lively and full of sound. Rough walls tend to diffuse sound, reflecting it in a variety of directions. This allows a spectator to perceive sounds from every part of the room, making it seem lively and full. For this reason, auditorium and concert hall designers prefer construction materials that are rough rather than smooth.


Reflection of sound waves also leads to echoes. Echoes are different than reverberations. Echoes occur when a reflected sound wave reaches the ear more than 0.1 seconds after the original sound wave was heard. If the elapsed time between the arrivals of the two sound waves is more than 0.1 seconds, then the sensation of the first sound will have died out. In this case, the arrival of the second sound wave will be perceived as a second sound rather than the prolonging of the first sound. There will be an echo instead of a reverberation.

Reflection of sound waves off of surfaces is also affected by the shape of the surface. As mentioned of water waves in Unit 10, flat or plane surfaces reflect sound waves in such a way that the angle at which the wave approaches the surface equals the angle at which the wave leaves the surface. This principle will be extended to the reflective behavior of light waves off of plane surfaces in great detail in Unit 13 of The Physics Classroom. Reflection of sound waves off of curved surfaces leads to a more interesting phenomenon. Curved surfaces with a parabolic shape have the habit of focusing sound waves to a point. Sound waves reflecting off of parabolic surfaces concentrate all their energy at a single point in space; at that point, the sound is amplified. Perhaps you have seen a museum exhibit that utilizes a parabolic-shaped disk to collect a large amount of sound and focus it at a focal point. If you place your ear at the focal point, you can hear even the faintest whisper of a friend standing across the room. Parabolic-shaped satellite dishes use this same principle of reflection to gather large amounts of electromagnetic waves and focus them at a point (where the receptor is located). Scientists have recently discovered some evidence that seems to reveal that a bull moose utilizes his antlers as a satellite dish to gather and focus sound. Finally, scientists have long believed that owls are equipped with spherical facial disks that can be maneuvered in order to gather and reflect sound towards their ears. The reflective behavior of light waves off curved surfaces will be studied in great detail in Unit 13 of The Physics Classroom Tutorial.


Diffraction of Sound Waves

Diffraction involves a change in direction of waves as they pass through an opening or around a barrier in their path. The diffraction of water waves was discussed in Unit 10 of The Physics Classroom Tutorial. In that unit, we saw that water waves have the ability to travel around corners, around obstacles and through openings. The amount of diffraction (the sharpness of the bending) increases with increasing wavelength and decreases with decreasing wavelength. In fact, when the wavelength of the wave is smaller than the obstacle or opening, no noticeable diffraction occurs.

Diffraction of sound waves is commonly observed; we notice sound diffracting around corners or through door openings, allowing us to hear others who are speaking to us from adjacent rooms. Many forest-dwelling birds take advantage of the diffractive ability of long-wavelength sound waves. Owls for instance are able to communicate across long distances due to the fact that their long-wavelength hoots are able to diffract around forest trees and carry farther than the short-wavelength tweets of songbirds. Low-pitched (long wavelength) sounds always carry further than high-pitched (short wavelength) sounds.

Scientists have recently learned that elephants emit infrasonic waves of very low frequency to communicate over long distances with each other. Elephants typically migrate in large herds that may sometimes become separated from each other by distances of several miles. Researchers who have observed elephant migrations from the air have been both impressed and puzzled by the ability of elephants at the beginning and the end of these herds to make extremely synchronized movements. The matriarch at the front of the herd might make a turn to the right, which is immediately followed by elephants at the end of the herd making the same turn to the right. These synchronized movements occur despite the fact that the elephants' vision of each other is blocked by dense vegetation. Only recently have researchers learned that the synchronized movements are preceded by infrasonic communication. While short-wavelength, high-frequency sound waves are unable to diffract around the dense vegetation, the long-wavelength infrasonic sounds produced by the elephants have sufficient diffractive ability to carry over such long distances.


Bats use high-frequency (short wavelength) ultrasonic waves in order to enhance their ability to hunt. The typical prey of a bat is the moth - an object not much larger than a couple of centimeters. Bats use ultrasonic echolocation methods to detect the presence of moths in the air. But why ultrasound? The answer lies in the physics of diffraction. As the wavelength of a wave becomes smaller than the obstacle that it encounters, the wave is no longer able to diffract around the obstacle; instead the wave reflects off the obstacle. Bats use ultrasonic waves with wavelengths smaller than the dimensions of their prey. These sound waves will encounter the prey, and instead of diffracting around the prey, will reflect off the prey and allow the bat to hunt by means of echolocation. The wavelength of a 50 000 Hz sound wave in air (speed of approximately 340 m/s) can be calculated as follows:

Wavelength = speed/frequency

Wavelength = (340 m/s)/ (50 000 Hz)

Wavelength = 0.0068 m

The wavelength of the 50 000 Hz sound waves (typical for a bat) is approximately 0.7 centimeters, smaller than the dimensions of a typical moth.
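The same wavelength calculation can be written as a one-line function; a minimal sketch assuming, as above, a speed of sound of 340 m/s:

SPEED_OF_SOUND = 340.0  # m/s in air

def wavelength_m(frequency_hz):
    """Wavelength = speed / frequency."""
    return SPEED_OF_SOUND / frequency_hz

# A 50 000 Hz bat call has a wavelength of 0.0068 m (about 0.7 cm),
# smaller than a typical moth, so the wave reflects rather than diffracts.
print(wavelength_m(50000))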

Refraction of waves involves a change in the direction of waves as they pass from one medium to another. Refraction, or bending of the path of the waves, is accompanied by a change in the speed and wavelength of the waves. So if the medium (or its properties) is changed, the speed of the wave is changed. Thus, waves passing from one medium to another will undergo refraction. Refraction of sound waves is most evident in situations in which the sound wave passes through a medium with gradually varying properties. For example, sound waves are known to refract when traveling over water. Even though the sound wave is not exactly changing media, it is traveling through a medium with varying properties; thus, the wave will encounter refraction and change its direction. Since water has a moderating effect upon the temperature of air, the air directly above the water tends to be cooler than the air far above the water. Sound waves travel slower in cooler air than they do in warmer air. For this reason, the portion of the wave front directly above the water is slowed down, while the portion of the wave front far above the water speeds ahead. Subsequently, the direction of the wave changes, refracting downward toward the water. This is depicted in the diagram at the right.
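The temperature dependence behind this refraction can be approximated with a widely used linear formula for the speed of sound in dry air; the sketch below uses that approximation (about 331.3 m/s at 0 degrees Celsius plus roughly 0.606 m/s per degree), with the example temperatures chosen only for illustration:

def speed_of_sound_air(temp_celsius):
    """Approximate speed of sound in dry air, in m/s."""
    return 331.3 + 0.606 * temp_celsius

# Cooler air just above the water carries sound more slowly than warmer air higher up,
# which is what bends the wave front downward toward the water.
print(speed_of_sound_air(10))  # roughly 337 m/s
print(speed_of_sound_air(25))  # roughly 346 m/s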


3.5 Competencies:

1. On a 1/2 sheet of illustration board, draw how sounds reflect, diffract, and refract.

2. Choose one song with a high pitch level and another with a low pitch level, and sing them in front of your classmates.

Chapter four/Pronunciation

Overview:

Words have different faces and different categories, depending on the kind of language used by the people. English, as a universal language, is used in many different parts of the world, whether in the north, south, east, or west. But even English has many faces when it comes to pronunciation. The pronunciation of English depends on where you come from, because each place has its own diction: the British have their British accents, and there are also the Russian accent, the American English accent, and other European accents. The pronunciation of English words therefore differs on the basis of diction.

This chapter presents the standard American pronunciations of some words that are commonly used in our daily life, to avoid confusion. You will learn how to pronounce these words as you go along the chapter.


4.1 Pronouncing

Pronouncing is the manner of bringing a word to life. It is how a word is delivered and how a word is understood. It gives the word its life as a word. It also establishes a word's meaning: how it will be used by people and how it will be perceived as a word. It also gives the word its identity, as to how and what it means the moment it is conveyed to others.

There are many factors involved in the whole process of pronunciation, including the organ used, which is the mouth.

Diction

Pronounced (dic-shun) (Latin: dictionem (nom. dictio) "a saying, expression, word"),[1] in its original, primary meaning, refers to the writer's or the speaker's distinctive vocabulary choices and style of expression in a poem or story.[2][3] A secondary, common meaning of "diction" means the distinctiveness of speech,[3][4][5] the art of speaking so that each word is clearly heard and understood to its fullest complexity and extremity, and concerns pronunciation and tone, rather than word choice and style. This secondary sense is more precisely and commonly expressed with the term enunciation, or with its synonym articulation.[6]

Diction has multiple concerns; register—words being either formal or informal in social context—is foremost. Literary diction analysis reveals how a passage establishes tone and characterization, e.g. a preponderance of verbs relating physical movement suggests an active character, while a preponderance of verbs relating states of mind portrays an introspective character. Diction also has an impact upon word choice and syntax.

Diction comprises eight elements: Phoneme, Syllable, Conjunction, Connective, Noun, Verb, Inflection, and Utterance.


4.2 English Sounds

Heat/ hit

Head/hed

Hot/hawt

Hood/hud

Hat/haat

Bit/beat

Neat/niit

Thigh/thay

Feet/fit

Wit/wit

Keep/kip

Ride/rayd

Laugh/laf

Care/keer

Renaissance/renasans

Debut/debu

Chorale/koral

Meriam/merayam

Bouquet/bokey

Façade/fasad

Chores/chors

Tight/tyt

Abattoir/abator

Sabotage/sabotash


Parts of Speech

Part of speech | Function or "job" | Example words | Example sentences

Verb | action or state | (to) be, have, do, like, work, sing, can, must | EnglishClub.com is a web site. I like EnglishClub.com.

Noun | thing or person | pen, dog, work, music, town, London, teacher, John | This is my dog. He lives in my house. We live in London.

Adjective | describes a noun | a/an, the, 2, some, good, big, red, well, interesting | I have two dogs. My dogs are big. I like big dogs.

Adverb | describes a verb, adjective or adverb | quickly, silently, well, badly, very, really | My dog eats quickly. When he is very hungry, he eats really quickly.

Pronoun | replaces a noun | I, you, he, she, some | Tara is Indian. She is beautiful.

Preposition | links a noun to another word | to, at, after, on, but | We went to school on Monday.

Conjunction | joins clauses or sentences or words | and, but, when | I like dogs and I like cats. I like cats and dogs. I like dogs but I don't like cats.

Interjection | short exclamation, sometimes inserted into a sentence | oh!, ouch!, hi!, well | Ouch! That hurts! Hi! How are you? Well, I don't know.


Verbs may be treated as two different parts of speech: lexical verbs (work, like, run) and auxiliary verbs (be, have, must).

Determiners may be treated as a separate part of speech, instead of being categorized under adjectives.

It is important to know the basics of language, such as the parts of speech, for a better approach to the diction and pronunciation of words, because the basics define the true form and the root form of language, from sounds to words to phrases and up to sentences. If one is able to master the eight parts of speech, one will be able to tell which is which, especially in the pronunciation of words.

Diction and pronunciation differ in some ways but are related in the truest sense: they both make up a word.


4.3 Competencies:

1. Look for a short poem and recite it in front of your classmates.

2. Make an English script and perform it in front of your classmates.


Chapter five/Sound Articulation and Manipulation

Overview:

After mastering the art of pronunciation and diction, we will be able to understand the English language better: how it is produced and perceived, and how phrases can be distinguished according to the kind of English accent a word carries.

Pronunciation and diction are not the only things that make up the whole English process; there are many others, and one of the most important parts of the English language is sound articulation and manipulation. Sound articulation and manipulation has to do with how a sound is made and produced by the muscles and tissues of the tongue, how it is stressed as a phrase or word, and how it functions as a word.

There are different articulatory processes, such as bilabial, glottal, and so on. Each will explain how a complicated sound is produced by the mouth, and how much the teeth contribute to making a very smooth pronunciation. As you go along the module, you will be able to see the mystery behind the production of sounds.


5.1 Articulatory Phonetics

We will spend the next few days studying articulatory phonetics: what is involved in the actual movement of various parts of the vocal tract during speech. (Use transparency to discuss organs of speech; oral, pharyngeal and nasal cavities; articulators, lungs and diaphragm.)

All speech sounds are made in this area. None are made outside of it (such as by stomping, hand clapping, snapping of fingers, farting, etc.)

Theoretically, any sound could be used as a speech sound provided the human vocal tract is capable of producing it and the human ear capable of hearing it. Actually only a few hundred different sounds or types of sounds occur in languages known to exist today, considerably fewer than the vocal tract is capable of producing.

Thus, all speech sounds result from air being somehow obstructed or modified within the vocal tract. This involves 3 processes working together:

a) The airstream process--the source of air used in making the sound.

b) The phonation process--the behavior of the vocal cords in the glottis during the production of the sound.

c) The oro-nasal process--the modification of that flow of air in the vocal tract (from the glottis to the lips and nose).


The airstream process

The first major way to categorize sounds according to phonetic features is by the source of air. Where does the air come from that is modified by the vocal organs? Languages can use any of three airstream mechanisms to produce sounds.

One airstream mechanism is by far the most important for producing sounds in the world's languages. Most sounds in the world's languages are produced by manipulating air coming into the vocal tract as it is being exhaled by the lungs, a method referred to as the pulmonic egressive airstream mechanism. Sounds made by manipulating air as it is exhaled from the lungs are called pulmonic egressive sounds. Virtually all sounds in English and other European languages are produced by manipulating exhaled air. And most sounds in other languages are also pulmonic egressive.

There is another variety of this pulmonic airstream mechanism. Inhaled air can also be modified to produce speech sounds. This actually occurs in a few rare and special cases, such as in Tsou, an aboriginal language of Taiwan, which has inhaled [f] and [h] (in words meaning 'ashes' and 'egg'). Such sounds are called pulmonic ingressive sounds, and the airstream mechanism for making such sounds is called the ingressive rather than the egressive version of the pulmonic airstream mechanism. Perhaps because it is physiologically harder to slow down an inhalation than an exhalation, pulmonic ingressive sounds are extremely rare.

The majority of the sounds in all languages of the world are pulmonic egressive sounds. However, in addition to using air being actively exhaled (or inhaled), two other airstream mechanisms are used to produce some of the sounds in some of the world's languages.


To understand the second airstream mechanism, the glottalic airstream mechanism, let's first look at a special pulmonic egressive sound, the glottal stop. Air being exhaled from the lungs may be stopped in the throat by a closure of the glottis. This trapping of air by the glottis is called a glottal stop. English actually has a glottal stop in certain exclamations, such as "uh-oh" and "uh-uh," and in certain dialectal pronunciations, as in "bottle." The IPA renders the glottal stop as a question mark without the dot: [ʔ].

The glottal stop itself is an example of a pulmonic egressive sound, since air from the lungs is being stopped. However, the glottis can be closed immediately before the production of certain other sounds, trapping a pocket of air in the vocal tract. If this reservoir of stationary air is then manipulated in the production of a sound, it yields another type of airstream mechanism, the glottalic airstream mechanism. Here's how it works. First, the vocal cords completely close so that for a brief moment no air escapes from the lungs and air is compressed in the throat (pharynx).

If the closed glottis is raised to push the air up and outward, an ejective consonant is produced. The air is forced into the vocal tract and there manipulated by the organs of speech. Compare glottalized vs. non-glottalized [k] in Georgian. Ejectives are found in the languages of the Caucasus Mountains, among many Native American languages, and among the Afroasiatic languages of north Africa (Hausa, Amharic).

If the closed glottis is lowered to create a small vacuum in the mouth, an implosive consonant is produced. The lowering glottis acts like the downward movement of a piston to create a brief rarefaction of the air in the vocal tract. When the stricture in the mouth is released, air moves into the mouth. Swahili has three implosives: [ɓ], [ɗ], [ɠ]. Implosives occur mostly in languages of east Africa, in several Amerindian languages, and in some Indo-European languages of northern India. (Compare the difference between implosives, which use the glottalic airstream mechanism, and ingressives, which use inhaled air.)

The third and final airstream mechanism used by human language is confined to certain languages of southwest Africa. It is called the velaric airstream mechanism. There is regular oral articulation, while the back of the tongue seals off air from the lungs and creates a relative vacuum. Air in the mouth is rarefied by backward and downward movement of the tongue. When the stricture is released, the air rushes in, creating a click. Because the click is produced entirely within the oral cavity, pulmonic air can continue to flow through the nose (one can produce a nasal hum while producing clicks).


The phonation process

The vocal cords can be in one of several positions during the production of a sound. The muscles of the vocal cords in the glottis can behave in various ways that affect the sound. The effect of this series of vocal cord states is called the phonation process.

Voicing. The vocal cords can be narrowed along their entire length so that they vibrate as the air passes through them. All English vowels are voiced. Voiceless vowels also occur, but they are far rarer than voiced vowels, and voiceless consonants are much more common than voiceless vowels. Voiceless vowels usually occur between voiceless consonants, as in Japanese. No language has only voiceless vowels; a language has either only voiced vowels, or voiced vowels and a few voiceless ones.

There are also several other vocal cord states that are used to modify sound in the world's languages. None is used as a regular feature of English.

Laryngealization. The posterior (arytenoid) portion of the vocal cords can be closed to produce a laryngealized or creaky sound. This doesn't play a meaningful role in English phonology, although we might use a creaky voice to imitate an old witch when reading fairy tales. Some languages of Southeast Asia and Africa have creaky vowels and consonants, as in Margi, a Nigerian language: ja 'to give birth' / laryngealized ja 'thigh'; or in Lango, a Nilotic language: man 'this' / laryngealized man 'testicles'.

Murmur


The anterior (ligamental) portion of the vocal cords can be closed, with the vocal cords vibrating. This produces murmured or breathy sounds. Murmured or breathy vowels occur in some languages of Southeast Asia. We make murmured sounds to imitate the Darth Vader voice. In many Indo-European languages of India the stop consonants have a murmured release; in other words, the anterior portion of the vocal cords remains closed after the stop has been produced, during part of the time the vowel is pronounced: bh, dh, gh, as in Buddha.


Whisper

A similar vocal cord state is used to produce the whisper. The vocal cords are narrowed but not vibrated; narrowing is more complete at the anterior end, less so at the posterior end. Whispered sounds do not contrast with non-whispered sounds to produce differences of meaning in any known language, but the whispered voice is common as a speech variant across languages. There is no IPA symbol for a whispered sound.

The oro-nasal process

Regardless of which airstream mechanism is used, speech sounds are produced when the moving air is somehow obstructed within the vocal tract. The vocal tract consists of three joined cavities: the oral cavity, the nasal cavity, and the pharyngeal cavity. The surfaces and boundaries of these cavities are known as the organs of speech. What happens to the air within these cavities is known as the oro-nasal process.

Let's talk first about the oro-nasal process in the articulation, or production, of consonants. There are two major ways to classify the activity of the speech organs in the production of consonants: place of articulation and manner of articulation.

Consonantal place of articulation

The place of articulation is defined in terms of two articulators. These may be: the lips, teeth, alveolar ridge, tongue tip (apex), tongue blade (lamina), back of the tongue (dorsum), hard palate, soft palate (velum), uvula, pharynx, or glottis (the space between the vocal cords, within the larynx or "voice box," the cartilaginous structure where the vocal cords are housed).


Bilabial [b, p, m, w]

Labiodental, [f, v]

Interdental, [T, D]

(Apico)-dental: the tip (or apex) of the tongue and the back of the upper teeth: Spanish [t, d, s, z].

Alveolar (apico-or lamino-) tongue and alveolar ridge (compare 'ten' vs. 'tenth'). Examples: English [t, d, s, z]

Postalveolar or palatoalveolar (apico- or lamino-) (English [S]/[Z]),

Retroflex (apico-palatal): the underside of the tongue tip and the palate or alveolar ridge: Midwest English word-initial r, and [t, d, n] in many Dravidian languages and many languages of Australia.

Palatal (apico- or lamino-) (English [j]), [S]/[Z] in many languages

Velar or dorso-velar Eng. [k, g, N] German [x] Greek [V]

Uvular French [R], also found in many German dialects.

Pharyngeal (constriction of the sides of the throat),

Glottal (glottal stop; the vocal cords are the two articulators, cf. a-ha, bottle, Cockney English 'ave). [h] is a glottal fricative sound.


5.2 Manner of articulation

Now let's look at the ways that moving air can be blocked and modified by various speech organs. There are several methods of modifying air when producing a consonant, and these methods are called manners of articulation. We have already examined where the air is blocked. Now let's look at how the air can be blocked.

1) Sounds that completely stop the stream of exhaled air are called plosives: [d], [t], [b], [p], [g], [k], and the glottal stop. Another word for plosive is stop (nasals are also stops, however, since the air is stopped in the oral cavity during their production).

2) Sounds produced by a near complete stoppage of air are called fricatives: [s], [z], [f], [v], [T], [D], [x], [V], [h], pharyngeal.

3) Sometimes a plosive and a fricative will occur together as a single, composite sound called an affricate: [tS], [ts], [dz], [dZ], [pf].

4) All other types of continuant are produced by relatively slight constriction of the oral cavity and are called approximants. Approximants are those sounds that do not show the same high degree of constriction as fricatives but are more constricted than vowels. During the production of an approximant, the air flow is smooth rather than turbulent. There are four types of approximants.

a) The glottis is slightly constricted to produce [h], a glottal approximant.

b) If slight stricture occurs between the roof of the mouth and the tongue, a palatal glide [j] is produced. If the constriction is between the two lips, a labiovelar glide is produced. The glides [j] and [w] are also called semivowels, since they are close to vowels in degree of blockage.

c) If the stricture is in the middle of the mouth, and the air flows out around the sides of the tongue, a lateral is produced. Laterals, or lateral approximants, are the various l-sounds that occur in language. In terms of phonetic features, l-sounds are + lateral, while all other sounds are + central.


d) The final type of approximant includes any of the various R-sounds that are not characterized by a flapping or trilling: alveolar and retroflex approximants. This includes the American English r (symbolized in the IPA by a turned r, [ɹ], but we will use the symbol [r]).

If the air flow is obstructed only for a brief moment by the touch of the tongue tip against the teeth or alveolar ridge, a tap, or tapped r [ɾ], is produced: cf. American English ladder; British English very.

If the tongue tip is actually set in motion by the flow of air so that it vibrates once, a flap or flapped r is produced: this is the sound of the Spanish single r. Flaps can even be labio-dental, as in one African language, Margi, spoken in Northern Nigeria.

If the air flow is set into turbulence several times in quick succession, a trill is produced. Trills may be alveolar, produced by the apex of the tongue: the Spanish double rr perro; the French uvular [R]: de rien; Bilabial trills [B] have been found to occur in two languages of New Guinea: mBulei = rat in Titan.

Degree of blockage

In discussing manner of articulation, it is also relevant to classify consonants according to the total degree of blockage. Remember that all sounds that involve significant stoppage of air in the vocal tract are known as consonants (this distinguishes them from vowels, which are produced by very little blockage of the airstream). Consonants differ in the manner as well as the degree to which the airstream is blocked. While we are discussing the manner in which air is blocked, we can also classify sounds as to the degree of blockage.

Plosives, fricatives, and affricates are all sounds made by nearly complete or complete blockage of the airstream. For this reason they are known collectively as obstruents. Consonants produced by less blockage of the airstream are called sonorants. With little blockage the airstream flows out smoothly, with relatively little turbulence. There are several types of sonorants, depending upon where the airstream is blocked in the vocal tract and how air flows around the impediment.


Sonorants are produced using the following manners of articulation:

1) Sounds produced by a stoppage in the oral cavity with release of air through the nose are called nasals. The nasals [m], [n], and [ng] have the same places of articulation as the plosives [b], [d], and [g], except that the velum lowers and air passes freely through the nose during their production; the oral stoppage is not released. Plosives are also known as oral stops, to distinguish them from the nasal stops. All known languages have at least one nasal except for several Salishan languages spoken around the Puget Sound (including Snohomish).

The division of consonants into obstruents and sonorants is not absolute. In some languages, such as Russian, the glide [j] is produced by much more blockage and could almost as easily be called a fricative.

Also, some l- and r- sounds are definitely fricatives rather than approximants. Some types of l- and r-sounds are characterized by a highly turbulent flow of air over the tongue, even more than for the trilled [r].

In Czech, besides the regular flapped r, there is a strident trilled and tensed [r] which is much more like an obstruent than a sonorant. Navaho has a fricative [tl] which is definitely more fricative than approximant.

Because all l- and r-sounds (whether approximant or non-approximant) are produced in the same way--with the air flowing around or over the tongue like water moving around a solid object--there is a collective term for these sounds: liquids. Liquids and nasals are sometimes able to carry a syllable. Syllabic r and l occur in Czech and Slovak: Strč prst skrz krk. The IPA uses a small vertical stroke beneath them to signify syllabicity.


Secondary articulation features in consonants

Lack of release. Plosives may not be released fully when pronounced at the end of words. This occurs with English [p̚, b̚, t̚, d̚, k̚, g̚].

Length. Consonants may be relatively long or short. Long consonants and vowels are common throughout the world, cf. Finnish; Russian: zhech/szhech 'to burn'; Italian: pizza, spaghetti. Long or double consonants are also known as geminate consonants and are indicated in the IPA by the length mark [ː]. Geminate plosives and affricates are also known as delayed release consonants.

Nasal release. In certain African languages: [dn].

Palatalization. Concomitant raising of the blade of the tongue toward the palate: cannon/canyon, do/dew; common among the sounds of Russian and other East-European languages: mat/mat' luk/lyuk. There are thousands of such doublets in Russian.

Labialization. Concomitant lip rounding, cf. sh in shoe vs. she (the IPA uses a superscript w to transcribe labialization). In some languages of Africa the contrast between labialized and non-labialized sounds signals differences in meaning, as in Twi: ofa´ 'he finds' / ofwa´ 'snail'.

Velarization. The dorsum of the tongue is raised slightly. Compare the l in wall, all (velarized or dark l) vs. like, land (continental or light l). The glide [w] is also slightly velarized. In Russian all non-palatalized consonants are velarized.

Pharyngealization. Concomitant constriction of throat. Afroasiatic languages of north Africa, such as Berber: zurn they are fat/ zghurn they made a pilgrimage.

Tensing. The muscles of the articulators can be tense or lax when pronouncing a sound. Cf. Korean stops: lax unvoiced p, lax voiced b, tense unvoiced pp. Tensing also occurs in the vocal cords during the production of tensed stops, so tenseness could also have been listed under the phonation processes.


5.3 Place of articulation

Places of articulation (passive & active): 1. Exo-labial, 2. Endo-labial, 3. Dental, 4. Alveolar, 5. Post-alveolar, 6. Pre-palatal, 7. Palatal, 8. Velar, 9. Uvular, 10. Pharyngeal, 11. Glottal, 12. Epiglottal, 13. Radical, 14. Postero-dorsal, 15. Antero-dorsal, 16. Laminal, 17. Apical, 18. Sub-apical

In articulatory phonetics, the place of articulation (also point of articulation) of a consonant is the point of contact where an obstruction occurs in the vocal tract between an articulatory gesture, an active articulator (typically some part of the tongue), and a passive location (typically some part of the roof of the mouth). Along with the manner of articulation and the phonation, this gives the consonant its distinctive sound.

The terminology in this article has been developed to precisely describe all the consonants in all the world's spoken languages. No known language distinguishes all of the places described here, so less precision is needed to distinguish the sounds of a particular language.


The passive place of articulation is the place on the more stationary part of the vocal tract where the articulation occurs. It can be anywhere from the lips, upper teeth, gums, or roof of the mouth to the back of the throat. Although it is a continuum, there are several contrastive areas such that languages may distinguish consonants by articulating them in different areas, but few languages will contrast two sounds within the same area unless there is some other feature which contrasts as well. The following areas are contrastive:

The upper lip (labial)
The upper teeth, either on the edge of the teeth or the inner surface (dental)
The alveolar ridge, the gum line just behind the teeth (alveolar)
The back of the alveolar ridge (post-alveolar)
The hard palate on the roof of the mouth (palatal)
The soft palate further back on the roof of the mouth (velar)
The uvula hanging down at the entrance to the throat (uvular)
The throat itself, a.k.a. the pharynx (pharyngeal)
The epiglottis at the entrance to the windpipe, above the voice box (epiglottal)

These regions are not strictly separated. For instance, in many languages the surface of the tongue contacts a relatively large area from the back of the upper teeth to the alveolar ridge; this is common enough to have received its own name, denti-alveolar. Likewise, the alveolar and post-alveolar regions merge into each other, as do the hard and soft palate, the soft palate and the uvula, and indeed all adjacent regions. Terms like pre-velar (intermediate between palatal and velar), post-velar (between velar and uvular), and upper vs lower pharyngeal may be used to specify more precisely where an articulation takes place. However, although a language may contrast pre-velar and post-velar sounds, it will not also contrast them with palatal and uvular sounds (of the same type of consonant), so that contrasts are limited to the number above if not always their exact location.


Place of articulation (active)

The articulatory gesture of the active place of articulation involves the more mobile part of the vocal tract. This is typically some part of the tongue or lips. The following areas are known to be contrastive:

The lower lip (labial)
Various parts of the front of the tongue:
o The tip of the tongue (apical)
o The upper front surface of the tongue just behind the tip, called the blade of the tongue (laminal)
o The surface of the tongue under the tip (subapical)
The body of the tongue (dorsal)
The base, a.k.a. root, of the tongue in the throat (radical)
The epiglottis, the flap at the base of the tongue (epiglottal)
The aryepiglottic folds at the entrance to the larynx (also epiglottal)
The glottis (laryngeal)

In bilabial consonants both lips move, so the articulatory gesture is bringing together the lips, but by convention the lower lip is said to be active and the upper lip passive. Similarly, in linguolabial consonants the tongue contacts the upper lip with the upper lip actively moving down to meet the tongue; nonetheless, in this gesture the tongue is conventionally said to be active and the lip passive, if for no other reason than the fact that the parts of the mouth below the vocal tract are typically active, and those above the vocal tract typically passive.

In dorsal gestures different parts of the body of the tongue contact different parts of the roof of the mouth, but this cannot be independently controlled, so they are all subsumed under the term dorsal. This is unlike coronal gestures involving the front of the tongue, which is more flexible.

The epiglottis may be active, contacting the pharynx, or passive, being contacted by the aryepiglottal folds. Distinctions made in these laryngeal areas are very difficult to observe and are the subject of ongoing investigation, with several as-yet unidentified combinations thought possible.


The glottis acts upon itself. There is a sometimes fuzzy line between glottal, aryepiglottal, and epiglottal consonants and phonation, which uses these same areas.

Unlike the passive articulation, which is a continuum, there are five discrete active articulators: the lip (labial consonants), the flexible front of the tongue (coronal consonants: laminal, apical, and subapical), the middle–back of the tongue (dorsal consonants), the root of the tongue together with the epiglottis (radical consonants), and the larynx (laryngeal consonants). These articulators are discrete in that they can act independently of each other, and two or more may work together in what is called coarticulation (see below). The distinction between the various coronal articulations, laminal, apical, and subapical, are however a continuum without clear boundaries.

Homorganic consonants

Consonants that have the same place of articulation, such as the alveolar sounds /n, t, d, s, z, l/ in English, are said to be homorganic. Similarly, labial /p, b, m/ and velar /k, ɡ, ŋ/ are homorganic. A homorganic nasal rule, an instance of assimilation, operates in many languages, where a nasal consonant must be homorganic with a following stop. We see this with English intolerable but implausible; another example is found in Yoruba, where the present tense of ba "hide" is mba "is hiding", while the present of sun "sleep" is nsun "is sleeping".

Central and lateral articulation

The tongue contacts the mouth with a surface, which has two dimensions: length and width. So far only points of articulation along its length have been considered. However, articulation varies along its width as well. When the airstream is directed down the center of the tongue, the consonant is said to be central. If, however, it is deflected off to one side, escaping between the side of the tongue and the side teeth, it is said to be lateral. Nonetheless, for simplicity's sake the place of articulation is assumed to be the point along the length of the tongue, and the consonant may in addition be said to be central or lateral. That is, a consonant may be lateral alveolar, like English /l/ (the tongue contacts the alveolar ridge, but allows air to flow off to the side), or lateral palatal, like Castilian Spanish ll /ʎ /. Some Indigenous Australian languages contrast dental, alveolar, retroflex, and palatal laterals, and many Native American languages have lateral fricatives and affricates as well.


Coarticulation

Some languages have consonants with two simultaneous places of articulation, called coarticulation. When these are doubly articulated, the articulators must be independently movable, and therefore there may only be one each from the major categories labial, coronal, dorsal, radical, and laryngeal.

The only common doubly articulated consonants are labial–velar stops like [k͡p], [ɡ͡b], and less commonly [ŋ͡m], which are found throughout West and Central Africa. Other combinations are rare. They include labial–(post)alveolar stops such as [t͡p], [d͡b], and [n͡m], found as distinct consonants only in a single language in New Guinea, and a uvular–epiglottal stop, [q͡ʡ], found in Somali.

More commonly, coarticulation involves secondary articulation of an approximantic nature, in which case both articulations can be similar, such as labialized labial [mʷ ] or palatalized velar [kʲ ]. This is the case of English [w], which is a velar consonant with secondary labial articulation.

Common coarticulations include:

Labialization, rounding the lips while producing the obstruction, as in [kʷ ] and English [w].

Palatalization, raising the body of the tongue toward the hard palate while producing the obstruction, as in Russian [tʲ] and [dʲ].

Velarization, raising the back of the tongue toward the soft palate (velum), as in the English dark l, [lˠ] (also transcribed [ɫ]).

Pharyngealization, constriction of the throat (pharynx), such as Arabic "emphatic" [tˤ ].

Production of vowels

A vowel is any phoneme in which airflow is impeded only or mostly by the voicing action of the vocal cords. The well-defined fundamental frequency provided by the vocal cords in voiced phonemes is only a convenience, however, not a necessity, since a strictly unvoiced whisper is still quite intelligible. Our interest is therefore most focused on further modulations of and additions to the fundamental tone by other parts of the vocal apparatus, determined by the variable dimensions of the oral, pharyngeal, and even nasal cavities.


5.4 Competencies:

1. Find a partner. Discuss and demonstrate the different sound articulations according to their places of articulation.

2. Group yourselves into two. Each group will give a quick demonstration of how words and sounds are made in the mouth.

Chapter six/Communication

Overview:

Communication is an essential tool for healthy living. It is our primary way of extending ourselves to others and closing distances, and it thus fosters oneness and unity between nations. It has been our means of keeping our country connected to others through all these years.

It keeps us standing strong as persons no matter where we go. Socialization has proven to be a great contributor to the wellness of a country, and socialization would not exist without communication.

Communication has different identities, different forms, and different characteristics and ways. These differences will be learned and tackled in this chapter to provide proper information on the topic, so that you will be able to communicate in a spontaneous way.


6.1

Communication (from Latin commūnicāre, meaning "to share") is the activity of conveying information through the exchange of thoughts, messages, or information, as by speech, visuals, signals, writing, or behavior. It is the meaningful exchange of information between two or a group of living creatures. Pragmatics defines communication as any sign-mediated interaction that follows combinatorial, context-specific and content-coherent rules. Communicative competence designates the capability to install intersubjective interactions, which means that communication is an inherent social interaction.

One definition of communication is "any act by which one person gives to or receives from another person, information about that person's needs, desires, perceptions, knowledge, or affective states. Communication may be intentional or unintentional, may involve conventional or unconventional signals, may take linguistic or non-linguistic forms, and may occur through spoken or other modes."

Communication requires a sender, a message, and a recipient, although the receiver doesn't have to be present or aware of the sender's intent to communicate at the time of communication; thus communication can occur across vast distances in time and space. Communication requires that the communicating parties share an area of communicative commonality. The communication process is complete once the receiver has understood the message of the sender. Communicating with others involves three primary steps:
Thought: First, information exists in the mind of the sender. This can be a concept, idea, information, or feelings.
Encoding: Next, a message is sent to a receiver in words or other symbols.
Decoding: Lastly, the receiver translates the words or symbols into a concept or information that he or she can understand.


Human communication

Human spoken and pictorial languages can be described as a system of symbols (sometimes known as lexemes) and the grammars (rules) by which the symbols are manipulated. The word "language" also refers to common properties of languages. Language learning normally occurs most intensively during human childhood. Most of the thousands of human languages use patterns of sound or gesture for symbols which enable communication with others around them. Languages seem to share certain properties although many of these include exceptions. There is no defined line between a language and a dialect. Constructed languages such as Esperanto, programming languages, and various mathematical formalisms are not necessarily restricted to the properties shared by human languages. Communication is the flow or exchange of information within people or a group of people.

A variety of verbal and non-verbal means of communicating exists such as body language, eye contact, sign language, haptic communication, chronemics, and media content such as pictures, graphics, sound, and writing.

Convention on the Rights of Persons with Disabilities also defines the communication to include the display of text, Braille, tactile communication, large print, accessible multimedia, as well as written and plain language, human-reader, augmentative and alternative modes, means and formats of communication, including accessible information and communication technology.[3] Feedback is critical to effective communication between participants.

Nonverbal communication describes the process of conveying meaning in the form of non-word messages. Some forms of nonverbal communication include chronemics, haptics, gesture, body language or posture, facial expression and eye contact, object communication such as clothing, hairstyles, architecture, symbols, infographics, and tone of voice, as well as an aggregate of the above. Speech also contains nonverbal elements known as paralanguage. These include voice quality, emotion and speaking style, as well as prosodic features such as rhythm, intonation and stress. Research has shown that up to 55% of spoken communication may occur through nonverbal facial expressions, and a further 38% through paralanguage.[4] Likewise, written texts include nonverbal elements such as handwriting style, spatial arrangement of words and the use of emoticons to convey emotional expressions in pictorial form.


Oral communication

Oral communication, while primarily referring to spoken verbal communication, can also employ visual aids and non-verbal elements to support the conveyance of meaning. Oral communication includes speeches, presentations, discussions, and aspects of interpersonal communication. As a type of face-to-face communication, body language and voice tonality play a significant role, and may have a greater impact upon the listener than informational content. This type of communication also garners immediate feedback.

Business communication

A business can flourish only when all objectives of the organization are achieved

effectively. For efficiency in an organization, all the people of the organization must be

able to convey their message properly.

Written communication and its historical development.

The progression of written communication can be divided into three revolutionary stages called "Information Communication Revolutions".[5] During the first stage, written communication first emerged through the use of pictographs. The pictograms were made in stone; hence written communication was not yet mobile. During the second stage, writing began to appear on paper, papyrus, clay, wax, etc. with common alphabets. The third stage is characterized by the transfer of information through controlled waves of electromagnetic radiation (i.e., radio, microwave, infrared) and other electronic signals.

Communication is thus a process by which meaning is assigned and conveyed in an attempt to create shared understanding. This process, which requires a vast repertoire of skills in interpersonal processing, listening, observing, speaking, questioning, analyzing, gestures, and evaluating enables collaboration and cooperation.[6]

Misunderstandings can be anticipated and solved through formulations, questions and

answers, paraphrasing, examples, and stories of strategic talk. Written communication

can be clarified by planning follow-up talks on critical written communication as part of

the every-day way of doing business. A few minutes spent talking in the present will

save valuable time later by avoiding misunderstandings in advance. A frequent method

for this purpose is reiterating what one heard in one's own words and asking the other

person if that really was what was meant.


Effective communication

Effective communication occurs when a desired effect is the result of intentional or unintentional information sharing, which is interpreted between multiple entities and acted on in a desired way. This effect also ensures the message is not distorted during the communication process. Effective communication should generate the desired effect and maintain the effect, with the potential to increase the effect of the message. Therefore, effective communication serves the purpose for which it was planned or designed. Possible purposes might be to elicit change, generate action, create understanding, inform or communicate a certain idea or point of view. When the desired effect is not achieved, factors such as barriers to communication are explored, with the intention being to discover how the communication has been ineffective.

6.2

Barriers to effective human communication

Barriers to effective communication can retard or distort the message and intention of the message being conveyed which may result in failure of the communication process or an effect that is undesirable. These include filtering, selective perception, information overload, emotions, language, silence, communication apprehension, gender differences and political correctness.

This also includes a lack of expressing "knowledge-appropriate" communication, which occurs when a person uses ambiguous or complex legal words, medical jargon, or descriptions of a situation or environment that is not understood by the recipient.

Physical barriers

Physical barriers are often due to the nature of the environment. An example of this is

the natural barrier which exists if staff are located in different buildings or on different

sites. Likewise, poor or outdated equipment, particularly the failure of management to

introduce new technology, may also cause problems. Staff shortages are another factor

which frequently causes communication difficulties for an organization. Distractions like background noise, poor lighting, or an environment which is too hot or cold can all affect people's morale and concentration, which in turn interferes with effective communication.


System design

System design faults refer to problems with the structures or systems in place in an organization. Examples might include an organizational structure which is unclear and therefore makes it confusing to know whom to communicate with. Other examples could be inefficient or inappropriate information systems, a lack of supervision or training, and a lack of clarity in roles and responsibilities which can lead to staff being uncertain about what is expected of them.

Attitudinal barriers

Attitudinal barriers come about as a result of problems with staff in an organization. These may be brought about, for example, by such factors as poor management, lack of consultation with employees, or personality conflicts which can result in people delaying or refusing to communicate. They may also stem from the personal attitudes of individual employees, which may be due to lack of motivation or dissatisfaction at work, brought about by insufficient training to enable them to carry out particular tasks, or simply by resistance to change due to entrenched attitudes and ideas; they may even result from a delay in payment at the end of the month.

Ambiguity of words/phrases

Words sounding the same but having different meaning can convey a different meaning altogether. Hence the communicator must ensure that the receiver receives the same meaning. It is better if such words are avoided by using alternatives whenever possible.

Individual linguistic ability

The use of jargon, difficult or inappropriate words in communication can prevent the

recipients from understanding the message. Poorly explained or misunderstood

messages can also result in confusion. However, research in communication has shown

that confusion can lend legitimacy to research when persuasion fails.

Physiological barriers

These may result from individuals' personal discomfort, caused—for example—by ill

health, poor eyesight or hearing difficulties.


Presentation of information

Presentation of information is important to aid understanding. Simply put, the communicator must consider the audience before making the presentation itself; in cases where that is not possible, the presenter can at least try to simplify his/her vocabulary so that the majority can understand.

Nonhuman communication

Every information exchange between living organisms, i.e. the transmission of signals involving a living sender and receiver, can be considered a form of communication; even primitive creatures such as corals are competent to communicate. Nonhuman communication also includes cell signaling, cellular communication, and chemical transmissions between primitive organisms like bacteria and within the plant and fungal kingdoms.

Animal communication

The broad field of animal communication encompasses most of the issues in ethology. Animal communication can be defined as any behavior of one animal that affects the current or future behavior of another animal. The study of animal communication, called zoosemiotics (distinguishable from anthroposemiotics, the study of human communication), has played an important part in the development of ethology, sociobiology, and the study of animal cognition. Animal communication, and indeed the understanding of the animal world in general, is a rapidly growing field, and even in the 21st century so far, a great share of prior understanding related to diverse fields such as personal symbolic name use, animal emotions, animal culture and learning, and even sexual conduct, long thought to be well understood, has been revolutionized.


Berlo's Sender-Message-Channel-Receiver Model of Communication

Linear Communication Model

The first major model for communication was introduced by Claude Shannon and Warren Weaver for Bell Laboratories in 1949.[15] The original model was designed to mirror the functioning of radio and telephone technologies. Their initial model consisted of three primary parts: sender, channel, and receiver. The sender was the part of a telephone a person spoke into, the channel was the telephone itself, and the receiver was the part of the phone where one could hear the other person. Shannon and Weaver also recognized that often there is static that interferes with one listening to a telephone conversation, which they deemed noise.

In a simple model, often referred to as the transmission model or standard view of communication, information or content (e.g. a message in natural language) is sent in some form (as spoken language) from an emisor/ sender/ encoder to a destination/ receiver/ decoder. This common conception of communication simply views communication as a means of sending and receiving information. The strengths of this model are simplicity, generality, and quantifiability. Social scientists Claude Shannon and Warren Weaver structured this model based on the following elements:

1. An information source, which produces a message.

2. A transmitter, which encodes the message into signals.

3. A channel, to which signals are adapted for transmission.

4. A receiver, which 'decodes' (reconstructs) the message from the signal.

5. A destination, where the message arrives.
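To make these five elements concrete, here is a minimal sketch in Python. It is not part of the original model description: the function names, the character-code encoding, and the noise rate are illustrative assumptions only.

```python
import random

# Toy sketch of the transmission model described above.
# The encoding scheme (character codes) and the noise model are illustrative only.

def transmitter(message):
    """Encode the message into signals (here, simple character codes)."""
    return [ord(ch) for ch in message]

def channel(signals, noise_level=0.1):
    """Carry the signals, occasionally corrupting one of them ('noise')."""
    noisy = []
    for s in signals:
        if random.random() < noise_level:
            s = s + random.choice([-1, 1])  # static on the line
        noisy.append(s)
    return noisy

def receiver(signals):
    """'Decode' (reconstruct) the message from the received signals."""
    return "".join(chr(s) for s in signals)

# Information source -> transmitter -> channel -> receiver -> destination
source_message = "hello"
print(receiver(channel(transmitter(source_message))))  # usually "hello"; noise may distort it
```

With the noise level set to zero the message always arrives intact; the section on communication noise below describes the many real-world sources of such interference.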


In 1960, David Berlo expanded on Shannon and Weaver's (1949) linear model of communication and created the SMCR Model of Communication.[17] The Sender-Message-Channel-Receiver Model of communication separated the model into clear parts and has been expanded upon by other scholars.

Communication is usually described along a few major dimensions: message (what type of things are communicated), source/emisor/sender/encoder (by whom), form (in which form), channel (through which medium), and destination/receiver/target/decoder (to whom). Wilbur Schramm (1954) also indicated that we should examine the impact that a message has (both desired and undesired) on the target of the message.[18] Between parties, communication includes acts that confer knowledge and experiences, give advice and commands, and ask questions. These acts may take many forms, in one of the various manners of communication. The form depends on the abilities of the group communicating. Together, communication content and form make messages that are sent towards a destination. The target can be oneself, another person or being, or another entity (such as a corporation or group of beings).

In light of these weaknesses, Barnlund (2008) proposed a transactional model of communication.[19] The basic premise of the transactional model of communication is that individuals are simultaneously engaging in the sending and receiving of messages.

In a slightly more complex form a sender and a receiver are linked reciprocally. This second attitude of communication, referred to as the constitutive model or constructionist view, focuses on how an individual communicates as the determining factor of the way the message will be interpreted. Communication is viewed as a conduit; a passage in which information travels from one individual to another and this information becomes separate from the communication itself. A particular instance of communication is called a speech act. The sender's personal filters and the receiver's personal filters may vary depending upon different regional traditions, cultures, or gender; which may alter the intended meaning of message contents. In the presence of "communication noise" on the transmission channel (air, in this case), reception and decoding of content may be faulty, and thus the speech act may not achieve the desired effect. One problem with this encode-transmit-receive-decode model is that the processes of encoding and decoding imply that the sender and receiver each possess something that functions as a codebook, and that these two code books are, at the very least, similar if not identical. Although something like code books is implied by the model, they are nowhere represented in the model, which creates many conceptual difficulties.


Theories of coregulation describe communication as a creative and dynamic continuous process, rather than a discrete exchange of information. Canadian media scholar Harold Innis theorized that people use different types of media to communicate, and that the medium a society chooses offers different possibilities for the shape and durability of that society (Wark, McKenzie 1997). His famous example is ancient Egypt, which built itself out of two media with very different properties: stone and papyrus. Papyrus is what he called 'space binding': it made possible the transmission of written orders across space and empires, and enabled the waging of distant military campaigns and colonial administration. Stone, by contrast, is 'time binding': through the construction of temples and pyramids, authority could be sustained from generation to generation. Through these media, rulers could change and shape communication in their society (Wark, McKenzie 1997).

Communication noise

In any communication model, noise is interference with the decoding of messages sent over a channel by an encoder. There are many examples of noise:

Environmental noise

Noise that physically disrupts communication, such as standing next to loud speakers at a party, or the noise from a construction site next to a classroom making it difficult to hear the professor.

Physiological-impairment noise

Physical maladies that prevent effective communication, such as actual deafness or blindness preventing messages from being received as they were intended.

Semantic noise

Different interpretations of the meanings of certain words. For example, the word "weed" can be interpreted as an undesirable plant in a yard, or as a euphemism for marijuana.

Syntactical noise

Mistakes in grammar can disrupt communication, such as abrupt changes in verb tense during a sentence.


Organizational noise

Poorly structured communication can prevent the receiver from accurate interpretation. For example, unclear and badly stated directions can make the receiver even more lost.

Cultural noise

Stereotypical assumptions can cause misunderstandings, such as unintentionally offending a non-Christian person by wishing them a "Merry Christmas".

Psychological noise

Certain attitudes can also make communication difficult. For instance, great anger or sadness may cause someone to lose focus on the present moment. Disorders such as Autism may also severely hamper effective communication.

6.4

There are mainly four types of communication which are used in various ways to convey the final message to the receiver.

Verbal Communication


6.5

Verbal communication includes sounds, words, language, and speech. Speaking is an effective way of communicating and helps in expressing our emotions in words. This form of communication is further classified into four types:

1. Intrapersonal Communication. This form of communication is extremely private and restricted to ourselves. It includes the silent conversations we have with ourselves, wherein we juggle roles between the sender and receiver who are processing our thoughts and actions. This process of communication, when analyzed, can either be conveyed verbally to someone or stay confined as thoughts.

2. Interpersonal Communication. This form of communication takes place between two individuals and is thus a one-on-one conversation. Here, the two individuals involved swap their roles of sender and receiver in order to communicate in a clearer manner.

3. Small Group Communication. This type of communication can take place only when there are more than two people involved. Here the number of people is small enough to allow each participant to interact and converse with the rest. Press conferences, board meetings, and team meetings are examples of group communication. Unless a specific issue is being discussed, small group discussions can become chaotic and difficult for everybody to interpret. This lag in understanding information completely can result in miscommunication.

4. Public Communication. This type of communication takes place when one individual addresses a large gathering of people. Election campaigns and public speeches are examples of this type of communication. In such cases, there is usually a single sender of information and several receivers who are being addressed.

Nonverbal Communication


➜ Nonverbal communication manages to convey the sender's message without having

to use words.

➜ This form of communication supersedes all other forms because of its usage and

effectiveness. Nonverbal communication involves the use of physical ways of communication, such as tone of the voice, touch, and expressions.

➜ Symbols and sign language are also included in nonverbal communication. Body

posture and language convey a lot of nonverbal messages when communicating verbally with someone.

➜ Folded arms and crossed legs are some of the defensive nonverbal signals

conveyed by people. Shaking hands, patting and touching, express feelings of intimacy. Facial expressions, gestures and eye contact are all different ways of communication. Creative and aesthetic nonverbal forms of communication include music, dancing and sculpturing.

Written Communication


➜ Written communication is the medium through which the message of the sender is

conveyed with the help of written words.

➜ Letters, personal journals, e-mails, reports, articles, and memos are some forms of

written communication.

➜ Unlike other forms of communication, written messages can be edited and rectified

before they are communicated to the receiver, thereby making written communication an indispensable part of informal and formal communication.

➜ This form of communication encapsulates features of visual communication as well,

especially when the messages are conveyed through electronic devices such as laptops, phones, and visual presentations that involve the use of text or words.

Visual Communication

➜ This form of communication involves the visual display of information, wherein the

message is understood or expressed with the help of visual aids. For example, typography, photography, signs, symbols, maps, colors, posters, banners and designs help the viewer understand the message visually.

➜ The greatest example of visual communication is the World Wide Web which

communicates with the masses, using a combination of text, design, links, images, and color. All of these visual features require us to view the screen in order to understand the message being conveyed.


6.6

The main components of the communication process are as follows:

1. Context - Communication is affected by the context in which it takes place. This context may be physical, social, chronological or cultural. Every communication proceeds within a context, and the sender chooses the message to communicate within that context.

2. Sender / Encoder - The sender or encoder is the person who sends the message. A sender makes use of symbols (words, graphics or visual aids) to convey the message and produce the required response; for instance, a training manager conducting training for a new batch of employees. The sender may be an individual, a group or an organization. The views, background, approach, skills, competencies, and knowledge of the sender have a great impact on the message. The verbal and non-verbal symbols chosen are essential in ensuring that the recipient interprets the message in the same terms as intended by the sender.

3. Message - The message is the key idea that the sender wants to communicate. It is a sign that elicits the response of the recipient. The communication process begins with deciding on the message to be conveyed, and it must be ensured that the main objective of the message is clear.

4. Medium - The medium is the means used to exchange or transmit the message. The sender must choose an appropriate medium for transmitting the message, or else the message might not be conveyed to the desired recipients. The choice of an appropriate medium is essential for making the message effective and correctly interpreted by the recipient, and it varies depending upon the features of the communication. For instance, a written medium is chosen when a message has to be conveyed to a small group of people, while an oral medium is chosen when spontaneous feedback is required from the recipient, as misunderstandings are cleared up then and there.

5. Recipient / Decoder - The recipient or decoder is the person for whom the message is intended, aimed or targeted. The degree to which the decoder understands the message depends upon various factors such as the knowledge of the recipient, their responsiveness to the message, and the reliance of the encoder on the decoder.

6. Feedback - Feedback is the main component of the communication process, as it permits the sender to analyse the efficacy of the message. It helps the sender in confirming the correct interpretation of the message by the decoder. Feedback may be verbal (through words) or non-verbal (in the form of smiles, sighs, etc.). It may also take written form, such as memos and reports.
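As a rough illustration of how these six components fit together, the following Python sketch models one pass through the process and ends with feedback to the sender. The class names, fields, and sample values are illustrative assumptions, not part of the module.

```python
from dataclasses import dataclass

@dataclass
class Message:
    content: str   # the key idea the sender wants to communicate
    context: str   # physical, social, chronological or cultural setting
    medium: str    # e.g. "oral" or "written"

@dataclass
class Sender:
    name: str
    def encode(self, idea, context, medium):
        """Turn an idea into symbols (words) suited to the context and medium."""
        return Message(content=idea, context=context, medium=medium)

@dataclass
class Recipient:
    name: str
    def decode(self, message):
        """Interpret the message; understanding depends on the recipient."""
        return f"{self.name} understood: {message.content}"

# One pass through the process, ending with feedback to the sender.
sender = Sender("training manager")
recipient = Recipient("new employee")
msg = sender.encode("Orientation starts at 9 a.m.", context="workplace", medium="oral")
feedback = recipient.decode(msg)   # feedback lets the sender check the interpretation
print(feedback)
```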


6.7

Competencies:

1. Make a chart describing the flow of the communication process.

2. On a piece of ¼ illustration board, illustrate the different types of communication.

3. Make a speech and deliver it in front of your classmates.

4. Grab a partner. List the things that you love and recite them in front of your classmates.


Chapter seven/non-verbal communication 95

Overview:

In communicating, we extend our ideas to others so that they can understand us, and so that they can extend their ideas to us. This is done in many ways: we communicate in different manners, using words and symbols to relay our ideas to others.

Communication that uses words and sound is the most common and basic way of communicating; in fact, it has become our primary means of communication. But there is another way of expressing our ideas to others, through the use of gestures and symbols alone. This is what we call non-verbal communication.

Non-verbal communication is the type of communication that uses only symbols and gestures to express ideas, feelings and emotions. It has become very useful to people who do not have speaking skills. In this type of communication, the symbols themselves carry the idea to others. Common examples are road signs, paintings and sculptures.

We will learn all about non-verbal communication as we go through the chapter.


7.1

Nonverbal communication is the process of communication through sending and receiving wordless (mostly visual) cues between people.

Messages can be communicated through gestures and touch, body language or posture, physical distance, facial expression and eye contact, which are all types of nonverbal communication. Speech contains nonverbal elements known as paralanguage, including voice quality, rate, pitch, volume, and speaking style, as well as prosodic features such as rhythm, intonation, and stress. Likewise, written texts have nonverbal elements such as handwriting style, spatial arrangement of words, or the physical layout of a page. However, much of the study of nonverbal communication has focused on face-to-face interaction, where it can be classified into three principal areas: environmental conditions where communication takes place, physical characteristics of the communicators, and behaviors of communicators during interaction.

History

The first scientific study of nonverbal communication was Charles Darwin's book The Expression of the Emotions in Man and Animals.[3] He argued that all mammals reliably show emotion in their faces. Seventy years later Silvan Tomkins (1911–1991) began his classic studies on human emotions in Affect Imagery Consciousness, volumes 1-4. Rudolf Laban (1879–1958) and Warren Lamb (1923- ) raised body movement analysis in the world of dance to a high level. Studies now range across a number of fields, including linguistics, semiotics and social psychology. Another large influence in nonverbal communication was Ray Birdwhistell, who "pioneered the original study of nonverbal communication—what he called 'kinesics.' He estimated that the average person actually speaks words for a total of about ten or eleven minutes a day and that the average sentence takes only about 2.5 seconds. Birdwhistell also estimated we can make and recognize around 250,000 facial expressions."


Posture

There are many different types of posture, including slouching, towering, legs spread,

jaw thrust, shoulders forward, and arm crossing. Posture or a person's bodily stance

communicates a variety of messages. Posture can be used to determine a participant's

degree of attention or involvement, the difference in status between communicators,

and the level of fondness a person has for the other communicator, depending on body

"openness".[4] Studies investigating the impact of posture on interpersonal relationships

suggest that mirror-image congruent postures, where one person's left side is parallel to

the other person's right side, lead to favorable perception of communicators and

positive speech; a person who displays a forward lean or decreases a backward lean

also signifies positive sentiment during communication.[5]

Posture can be situation-relative, that is, people will change their posture depending on

the situation they are in.

Clothing

Clothing is one of the most common forms of non-verbal communication. The study of clothing and other objects as a means of non-verbal communication is known as artifactics or objectics. The types of clothing that an individual wears convey nonverbal cues about his or her personality, background and financial status, and how others will respond to them.[3] An individual's clothing style can demonstrate their culture, mood, and level of confidence, interests, age, authority, values/beliefs, and their sexual identity.

A study of the clothing worn by women attending discothèques, carried out in Vienna,

Austria, showed that in certain groups of women (especially women who were without

their partners), motivation for sex and levels of sexual hormones were correlated with

aspects of their clothing, especially the amount of skin displayed and the presence of

sheer clothing.


Gestures

Gestures may be made with the hands, arms or body, and also include movements of

the head, face and eyes, such as winking, nodding, or rolling one's eyes. Although the

study of gesture is still in its infancy, some broad categories of gestures have been

identified by researchers. The most familiar are the so-called emblems or quotable

gestures. These are conventional, culture-specific gestures that can be used as

replacement for words, such as the hand wave used in western cultures for "hello" and

"goodbye." A single emblematic gesture can have a very different significance in

different cultural contexts, ranging from complimentary to highly offensive.[10] For a list of

emblematic gestures, see List of gestures. There are some universal gestures like the

shoulder shrug.[3]

Gestures can also be categorized as either speech independent or speech related.

Speech-independent gestures are dependent upon culturally accepted interpretation

and have a direct verbal translation.[4] A wave or a peace sign are examples of speech-

independent gestures. Speech-related gestures are used in parallel with verbal speech;

this form of nonverbal communication is used to emphasize the message that is being

communicated. Speech-related gestures are intended to provide supplemental

information to a verbal message such as pointing to an object of discussion.

Facial expressions, more than anything, serve as a practical means of communication.

With all the various muscles that precisely control mouth, lips, eyes, nose, forehead,

and jaw, human faces are estimated to be capable of more than ten thousand different

expressions. This versatility makes the non-verbal expressions of the face extremely efficient and

honest, unless deliberately manipulated. In addition, many of these emotions, including

happiness, sadness, anger, fear, surprise, disgust, shame, anguish and interest are

universally recognized.[11]

Displays of emotions can generally be categorized into two groups: negative and

positive. Negative emotions usually manifest as increased tension in various muscle

groups: tightening of jaw muscles, furrowing of forehead, squinting eyes, or lip occlusion

(when the lips seemingly disappear). In contrast, positive emotions are revealed by the

loosening of the furrowed lines on the forehead, relaxation of the muscles around the

mouth, and widening of the eye area. When individuals are truly relaxed and at ease,

the head will also tilt to the side, exposing our most vulnerable area, the neck. This is a

high-comfort display, often seen during courtship that is nearly impossible to mimic

when tense or suspicious.


Engagement

Information about a relationship and about affect can be communicated by body posture, eye gaze and physical contact.

Eye contact is when two people look at each other's eyes at the same time; it is the primary nonverbal way we indicate engagement, interest, attention and involvement. Studies have found that people use their eyes to indicate interest. This includes frequently recognized actions of winking and movements of the eyebrows.[citation needed]

Men and women have different ways of eye contact. Men stare at the women they are interested in, whereas women tend to keep their eyes roaming around the room to see who is there. Disinterest is highly noticeable when someone shows little eye contact in a social setting. A person's pupils dilate when they are interested in the other person. People, sometimes without consciously doing so, probe each other's eyes and faces for positive or negative mood signs. Generally speaking, the longer there is established eye contact between two people, the greater the intimacy levels.[1]


According to Ekman, "Eye contact (also called mutual gaze) is another major channel of nonverbal communication. The duration of eye contact is its most meaningful aspect."[13] Gaze comprises the actions of looking while talking and listening. The length of a gaze, the frequency of glances, patterns of fixation, pupil dilation, and blink rate are all important cues in nonverbal communication. "Liking generally increases as mutual gazing increases." Along with the detection of disinterest, deceit can also be observed in a person. Hogan states, "when someone is being deceptive their eyes tend to blink a lot more. Eyes act as a leading indicator of truth or deception." Eye aversion is the avoidance of eye contact. Eye contact and facial expressions provide important social and emotional information. Overall, as Pease states, "Give the amount of eye contact that makes everyone feel comfortable. Unless looking at others is a cultural no-no, lookers gain more credibility than non-lookers."

In concealing deception, nonverbal communication makes it easier to lie without being

revealed. This is the conclusion of a study where people watched made-up interviews of

persons accused of having stolen a wallet. The interviewees lied in about 50% of the

cases. People had access to either written transcript of the interviews, or audio tape

recordings, or video recordings. The more clues that were available to those watching,

the larger was the trend that interviewees who actually lied were judged to be truthful.

That is, people that are clever at lying can use voice tone and face expression to give

the impression that they are truthful. Contrary to popular belief, a liar does not always

avoid eye contact. In an attempt to be more convincing, liars deliberately made more

eye contact with interviewers than those that were telling the truth. However, there are

many cited examples of cues to deceit, delivered via nonverbal (paraverbal and visual)

communication channels, through which deceivers supposedly unwittingly provide clues

to their concealed knowledge or actual opinions. Most studies examining the nonverbal

cues to deceit rely upon human coding of video footage (c.f. Vrij, 2008), although a

recent study also demonstrated bodily movement differences between truth-tellers and

liars using an automated body motion capture system.


Nonverbal actions

According to Matsumoto and Juang, the nonverbal motions of different people indicate important channels of communication. The authors state that nonverbal communication

is very important to be aware of, especially if comparing gestures, gaze, and tone of

voice amongst different cultures. As Latin American cultures embrace big speech

gestures, Middle Eastern cultures are relatively more modest in public and are not

expressive. Within cultures, different rules are made about staring or gazing. In some

cultures, gaze can be seen as a sign of respect. Voice is a category that changes within

cultures. Depending on whether or not the culture is expressive or non-expressive,

many variants of the voice can depict different reactions.

Proxemics

Proxemics is the study of how people use and perceive the physical space around them. The space between the sender and the receiver of a message influences the way the message is interpreted. In addition, the perception and use of space varies significantly across cultures[41] and different settings within cultures. Space in nonverbal communication may be divided into four main categories: intimate, social, personal, and public space.
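As a small illustration of the four categories, the sketch below classifies an interpersonal distance into a zone. The numeric thresholds follow Edward T. Hall's commonly cited ranges, which are not given in the text above and which, as the text notes, vary significantly across cultures and settings.

```python
# Illustrative only: approximate thresholds (in metres) often attributed to
# Edward T. Hall; real boundaries differ across cultures and situations.

def proxemic_zone(distance_m):
    if distance_m < 0.45:
        return "intimate"
    elif distance_m < 1.2:
        return "personal"
    elif distance_m < 3.6:
        return "social"
    else:
        return "public"

print(proxemic_zone(0.3))   # intimate
print(proxemic_zone(2.0))   # social
```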

Kinesics

The term "kinesics" was first used (in 1952) by Ray Birdwhistell, an anthropologist who wished to study how people communicate through posture, gesture, stance, and movement. Part of Birdwhistell's work involved making films of people in social situations and analyzing them to show different levels of communication not clearly seen otherwise. Several other anthropologists, including Margaret Mead and Gregory Bateson, also studied kinesics.


7.2

Haptic: touching in communication

A high five is an example of communicative touch.

Haptic is the study of touching as nonverbal communication and haptic communication refers to how people and other animals communicate via touching. Touches among humans that can be defined as communication include handshakes, holding hands, kissing (cheek, lips, hand), back slapping, high fives, a pat on the shoulder, and brushing an arm. Touching of oneself may include licking, picking, holding, and scratching.[4] These behaviors are referred to as "adapters" or "tells" and may send messages that reveal the intentions or feelings of a communicator and a listener. The meaning conveyed from touch is highly dependent upon the culture, the context of the situation, the relationship between communicators, and the manner of touch.[47]

Touch is an extremely important sense for humans; as well as providing information about surfaces and textures it is a component of nonverbal communication in interpersonal relationships, and vital in conveying physical intimacy. It can be both sexual (such as kissing) and platonic (such as hugging or tickling).

Touch is the earliest sense to develop in the fetus. Human babies have been observed to have enormous difficulty surviving if they do not possess a sense of touch, even if they retain sight and hearing. Babies who can perceive through touch, even without sight and hearing, tend to fare much better.

In chimpanzees the sense of touch is highly developed. As newborns they see and hear poorly but cling strongly to their mothers. Harry Harlow conducted a controversial study involving rhesus monkeys and observed that monkeys reared with a "terry cloth mother," a wire feeding apparatus wrapped in soft terry cloth that provided a level of tactile stimulation and comfort, were considerably more emotionally stable as adults than those with a mere wire mother.(Harlow,1958)

Touching is treated differently from one country to another and socially acceptable levels of touching vary from one culture to another (Remland, 2009). In Thai culture, for example, touching someone's head may be thought rude. Remland and Jones (1995) studied groups of people communicating and found that touching was rare among the English (8%), the French (5%) and the Dutch (4%) compared to Italians (14%) and Greeks (12.5%). Striking, pushing, pulling, pinching, kicking, strangling and hand-to-hand fighting are forms of touch in the context of physical abuse.


7.3

Interpreting non-verbal elements: neurophysiologic aspects:

What is the biological explanation for the process of interpreting non-verbal elements,

for the interpretation of gestures or other signs, as well as for the supposition that this

process happens even before the act of interpreting utterances? It has been proved that

emotional areas of the brain fire up even as cognitive areas fire up when discussing

emotional topics. That means that the emotional areas of an interpreter's brain will work

as well as the cognitive areas, when the interpreter listens to an angry speaker who

raises his voice. Of course, "good" interpreters should not show emotions and must

remain impartial. However, it is clear that they are not machines, but human beings who

also feel emotions and who can detect those emotions which are embedded in non-

verbal communication.


How do human beings feel emotions? What are the neurophysiologic conditions

for this process? The right hemisphere of the brain is also called the emotional brain, or

limbic system. It is the oldest part of the human brain, the size of a walnut. The

prefrontal cortex is a part of the neocortex, the so-called thinking brain. The neocortex is

responsible for analytic processes, comparisons or considerations, for problem-solving,

planning, organisation and rational thought. It also processes emotionally relevant

stimuli. Both prefrontal cortex and neocortex developed during the process of human

evolution and are therefore younger than the limbic system. The prefrontal cortex as

well as the neocortex interacts with the evolutionary older limbic system. Part of the

limbic system is called the amygdala.

The process of understanding non-verbal and verbal elements can be described as

follows: Neural pathways bring information to the brain through the senses. Information

entering through eyes or ears goes first to the thalamus, to the large part of the limbic

system. The thalamus could be compared to a mail sorter. It decides to which parts of

the brain to send the information. If the incoming information is emotional, the thalamus

sends out two signals – the first to the amygdala and the second to the neocortex. As a

result, the emotional brain, the limbic system, receives the information first. For the

biological aspect of a species' survival, here human beings, this point is very important:

hypothetically, in the event of a crisis (attack of a wild animal, confrontation with an

enemy etc.) the interpreter could react (flee or fight) before the thinking brain has even

received the information and had a chance to weigh the options. Today, the interpreter

need not fear being confronted with such dangerous attacks or confrontations. Today's interpreter can generally react in a "cooler" way than his or her ancestors. The amygdala and the rest of the limbic system are a remnant of times when emotions like

anger or anxiety were much more useful to the survival of the species than nowadays.

However, today, interpreters can be confronted with an angry speaker, on whom more

or less violent body gestures can be observed (for example when he or she bangs a fist

on the table). In consecutive interpretation, thanks to the limbic system, the interpreter

experiences the anger first, can then analyze it, and express the message with less

violent body gestures, but, with a severe tone of voice.


Non-verbal communication is not only crucial in a plain daily communication

situation but also for the interpreter. Non-verbal communication can take various forms,

each of which illustrates or replaces a certain part of the verbal communication. It

includes many more elements than one might think at first.

When interpreters are in a working situation where the audience will not see them, non-

verbal communication can represent a problem. The audience might even be tempted

to believe that the interpreters have not done a good job.

In order to be able to work properly, interpreters need to make sense of non-verbal

cues. This is only possible because a special part of our brain deals with the emotional

part of the message. Not only intelligence but also emotional intelligence is needed for

interpreting non-verbal elements.

Whether non-verbal communication supports the interpreters in their task or presents a

difficulty, it will always play an important role.


7.4

Competencies:

1. Get a ¼ illustration board and make a collage of the different symbols and signs that you know.

2. Pick a song and perform it using hand and body gestures or symbols.


Chapter eight / International Phonetic Alphabet 108

Overview:

When we communicate and socialize, we make use of words and phrases, combining them to formulate sentences. The words and phrases we use in talking produce different effects in the mouth: the mouth changes position as we say each word, and each word is pronounced depending on the letters it contains.

There are different classifications of letters depending on their sounds: some we call the bilabial sounds, the liquids, the interdental sounds, and so on.

It is important for us to know these things, because mastering them gives us better intonation, diction, pronunciation and sentence construction.

This chapter will introduce you to the basics of the International Phonetic Alphabet, which is maintained by the International Phonetic Association. It is composed of different symbols, each representing a particular sound.

By the end, you should be able to understand the IPA, its framework, and how it is used.


8.1

International Phonetic Alphabet

The International Phonetic Alphabet (IPA)[note 1] is an alphabetic system of phonetic notation based primarily on the Latin alphabet. It was devised by the International Phonetic Association as a standardized representation of the sounds of oral language.[1] The IPA is used by lexicographers, foreign language students and teachers, linguists, speech-language pathologists, singers, actors, constructed language creators, and translators.[2][3]

The IPA is designed to represent only those qualities of speech that are distinctive in oral language: phonemes, intonation, and the separation of words and syllables.[1] To represent additional qualities of speech, such as tooth gnashing, lisping, and sounds made with a cleft palate, an extended set of symbols called the Extensions to the IPA may be used.[2]

IPA symbols are composed of one or more elements of two basic types, letters and

diacritics. For example, the sound of the English letter ⟨t⟩ may be transcribed in IPA with a single letter, [t], or with a letter plus diacritics, such as [tʰ], depending on how precise one wishes to be. Often, slashes are used to signal broad or phonemic transcription; thus, /t/ is less specific than, and could refer to, either [tʰ] or [t], depending on the context and language.

Occasionally letters or diacritics are added, removed, or modified by the International

Phonetic Association. As of the most recent change in 2005,[4] there are 107 letters, 52

diacritics, and four prosodic marks in the IPA.


History

In 1886, a group of French and British language teachers, led by the French linguist Paul Passy, formed what would come to be known from 1897 onwards as the International Phonetic Association (in French, l’Association phonétique internationale). Their original alphabet was based on a spelling reform for English known as the Romic alphabet, but in order to make it usable for other languages, the values of the symbols were allowed to vary from language to language. For example, the sound [ʃ ] (the sh in

shoe) was originally represented with the letter ⟨c⟩ in English, but with the letter ⟨ch⟩ in French.[6] However, in 1888, the alphabet was revised so as to be uniform across languages, thus providing the base for all future revisions.

Since its creation, the IPA has undergone a number of revisions. After major revisions and expansions in 1900 and 1932, the IPA remained unchanged until the IPA Kiel Convention in 1989. A minor revision took place in 1993 with the addition of four letters for mid-central vowels and the removal of letters for voiceless implosives. The alphabet was last revised in May 2005 with the addition of a letter for a labiodental flap. Apart from the addition and removal of symbols, changes to the IPA have consisted largely in renaming symbols and categories and in modifying typefaces.[2]

Extensions to the IPA for speech pathology were created in 1990 and officially adopted

by the International Clinical Phonetics and Linguistics Association in 1994.

Among the symbols of the IPA, 107 letters represent consonants and vowels, 31

diacritics are used to modify these, and 19 additional signs indicate suprasegmental

qualities such as length, tone, stress, and intonation.


Letterforms

The letters chosen for the IPA are meant to harmonize with the Latin alphabet. For this reason, most letters are either Latin or Greek, or modifications thereof. Some

letters are neither: for example, the letter denoting the glottal stop, ⟨ʔ ⟩, has the form of a dotless question mark, and derives originally from an apostrophe. A few letters, such as

that of the voiced pharyngeal fricative, ⟨ʕ ⟩, were inspired by other writing systems (in this case, the Arabic letter ‘ain).[9]

Despite its preference for harmonizing with the Latin script, the International Phonetic Association has occasionally admitted other letters. For example, before 1989, the IPA

letters for click consonants were ⟨ʘ⟩, ⟨ʇ ⟩, ⟨ʗ ⟩, and ⟨ʖ ⟩, all of which were derived either

from existing IPA letters, or from Latin and Greek letters. However, except for ⟨ʘ⟩, none of these letters were widely used among Khoisanists or Bantuists, and as a result they

were replaced by the more widespread symbols ⟨ʘ⟩, ⟨ǀ⟩, ⟨ǃ⟩, ⟨ǂ⟩, and ⟨ǁ⟩ at the IPA Kiel Convention in 1989.[13] Although the IPA diacritics are fully featural, there is little systemicity in the letter forms. A retroflex articulation is consistently indicated with a right-swinging tail, as in ⟨ɖ ʂ ɳ⟩, and implosion by a top hook, ⟨ɓ ɗ ɠ⟩, but other pseudo-featural elements are due to haphazard derivation and coincidence. For example, all nasal consonants but uvular ⟨ɴ⟩ are based on the form ⟨n⟩: ⟨m ɱ n ɳ ɲ ŋ⟩. However, the similarity between ⟨m⟩ and ⟨n⟩ is a historical accident: ⟨ɲ⟩ and ⟨ŋ⟩ are derived from ligatures of gn and ng, and ⟨ɱ⟩ is an ad hoc imitation of ⟨ŋ⟩. In none of these is the form consistent with other letters that share these places of articulation.

Some of the new letters were ordinary Latin letters turned upside-down, such as ɐ ɔ ə ɟ ɥ ɯ ɹ ʁ ʇ ʌ ʍ ʎ (turned a c e f h m r ʀ t v w y). This was easily done with mechanical typesetting machines, and had the advantage of not requiring the casting of special type for IPA symbols.


Symbols and sounds

The International Phonetic Alphabet is based on the Latin alphabet, using as few non-Latin forms as possible. The Association created the IPA so that the sound values of most consonant letters taken from the Latin alphabet would correspond to "international usage". Hence, the letters ⟨b⟩, ⟨d⟩, ⟨f⟩, (hard) ⟨ɡ⟩, (non-silent) ⟨h⟩, (unaspirated) ⟨k⟩, ⟨l⟩, ⟨m⟩, ⟨n⟩, (unaspirated) ⟨p⟩, (voiceless) ⟨s⟩, (unaspirated) ⟨t⟩, ⟨v⟩, ⟨w⟩, and ⟨z⟩ have the values used in English; and the vowel letters from the Latin alphabet (⟨a⟩, ⟨e⟩, ⟨i⟩, ⟨o⟩, ⟨u⟩) correspond to the (long) sound values of Latin: [i] is like the vowel in machine, [u] is as in rule, etc. Other letters may differ from English, but are

used with these values in other European languages, such as ⟨j⟩, ⟨r⟩, and ⟨y⟩.

This inventory was extended by using capital or cursive forms, diacritics, and rotation. There are also several symbols derived or taken from the Greek alphabet, though the

sound values may differ. For example, ⟨ʋ⟩ is a vowel in Greek, but an only indirectly related consonant in the IPA. For most of these, subtly different glyph shapes have been devised for the IPA, in particular ⟨ɑ⟩, ⟨ɣ⟩, ⟨ɛ⟩, ⟨ɪ⟩, and ⟨ʋ⟩, which are encoded in Unicode separately from their Greek "parent" letters; three of these (⟨β⟩, ⟨θ⟩ and ⟨χ⟩) are often used unmodified in form, as they have not been encoded separately.

The sound values of modified Latin letters can often be derived from those of the original letters. For example, letters with a rightward-facing hook at the bottom represent retroflex consonants; and small capital letters usually represent uvular consonants. Apart from the fact that certain kinds of modification to the shape of a letter generally correspond to certain kinds of modification to the sound represented, there is no way to deduce the sound represented by a symbol from its shape (unlike, for example, in Visible Speech).

Beyond the letters themselves, there are a variety of secondary symbols which aid in transcription. Diacritic marks can be combined with IPA letters to transcribe modified phonetic values or secondary articulations. There are also special symbols for suprasegmental features such as stress and tone that are often employed.


Brackets and phonemes

There are two principal types of brackets used to set off IPA transcriptions:

[square brackets] are used for phonetic details of the pronunciation, possibly including details that may not be used for distinguishing words in the language being transcribed, but which the author nonetheless wishes to document.

/slashes/ are used to mark off phonemes, all of which are distinctive in the language, without any extraneous detail.

For example, while the /p/ sounds of pin and spin are pronounced slightly differently in English (and this difference would be meaningful in some languages), the difference is not meaningful in English. Thus phonemically the words are /pɪn/ and /spɪn/, with the same /p/ phoneme. However, to capture the difference between them (the allophones of /p/), they can be transcribed phonetically as [pʰɪn] and [spɪn].
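The pin/spin example can also be written out as data. This tiny Python sketch (the dictionary layout is mine, not part of the source text) simply shows one phoneme /p/ with two phonetic realizations.

```python
# One phoneme, /p/, with two allophones; the strings are ordinary IPA characters.
words = {
    "pin":  {"phonemic": "/pɪn/",  "phonetic": "[pʰɪn]"},   # aspirated [pʰ]
    "spin": {"phonemic": "/spɪn/", "phonetic": "[spɪn]"},   # unaspirated [p]
}

for word, forms in words.items():
    print(word, forms["phonemic"], "->", forms["phonetic"])
```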

Other conventions are less commonly seen:

Double slashes //...//, pipes |...|, double pipes ||...||, or braces {...} may be used around a word to denote its underlying structure, more abstract even than that of phonemes. See morphophonology for examples.

Angle brackets are used to clarify that the letters represent the original orthography of the language, or sometimes an exact transliteration of a non-Latin script, not the IPA; or, within the IPA, that the letters themselves are indicated,

not the sound values that they carry. For example, ⟨pin⟩ and ⟨spin⟩ would be seen for those words, which do not contain the ee sound [i] of the IPA letter ⟨i⟩. Italics are perhaps more commonly used for this purpose when full words are being written (as pin, spin above), but may not be sufficiently clear for individual

letters and digraphs. The true angle brackets ⟨...⟩ (U+27E8, U+27E9) are not supported by many non-mathematical fonts as of 2010. Therefore chevrons ‹...› (U+2039, U+203A) are sometimes used in substitution, as are the less-than and greater-than signs <...> (U+003C, U+003E).

{Braces} are used for prosodic notation. See Extensions to the International Phonetic Alphabet for examples in that system.

(Parentheses) are used for indistinguishable utterances. They are also seen for silent articulation (mouthing), where the expected phonetic transcription is derived from lip-reading, and with periods to indicate silent pauses, for example (...).

Double parentheses indicate obscured or unintelligible sound, as in ((2 syll.)), two audible but unidentifiable syllables.
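As a quick aid for typing or checking the bracketing characters mentioned above, the Python snippet below (an illustration added here, not part of the IPA conventions themselves) prints each character from its Unicode code point.

# Print the bracketing characters discussed above from their Unicode code points.
BRACKETS = [
    (0x27E8, "mathematical left angle bracket"),
    (0x27E9, "mathematical right angle bracket"),
    (0x2039, "single left-pointing angle quotation mark (chevron)"),
    (0x203A, "single right-pointing angle quotation mark (chevron)"),
    (0x003C, "less-than sign"),
    (0x003E, "greater-than sign"),
]
for codepoint, name in BRACKETS:
    print(chr(codepoint), f"U+{codepoint:04X}", name)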


Standard orthographies and capital variants

IPA letters have been incorporated into the alphabets of various languages, notably via the Africa Alphabet in sub-Saharan Africa: Hausa, Fula, Akan, Gbe languages, Manding languages, Lingala, etc. This has created the need for capital variants. For example, Kabiyé of northern Togo has Ɔ ɔ, Ɛ ɛ, Ɖ ɖ, Ŋ ŋ, Ɣ ɣ, Ʃ ʃ, Ʊ ʊ (or Ʋ ʋ):

MBƱ AJƐ YA KIGBƐNDƱƱ ŊGBƐ YƐ KEDIƔZAƔ SƆSƆ Ɔ TƆM SE.

These, and others, are supported by Unicode, but appear in Latin ranges other than the IPA extensions.
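Because these capital variants are ordinary cased letters in Unicode, standard case conversion handles them. The small Python check below is illustrative only and is not part of the original text:

# Unicode defines uppercase mappings for these Africa Alphabet / IPA-derived letters,
# so ordinary case conversion works on them.
for lower in "ɔ ɛ ɖ ŋ ɣ ʃ ʊ ʋ".split():
    print(lower, "->", lower.upper())
# Expected output includes: ɔ -> Ɔ, ɛ -> Ɛ, ɖ -> Ɖ, ŋ -> Ŋ, ɣ -> Ɣ, ʃ -> Ʃ, ʊ -> Ʊ, ʋ -> Ʋ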

In the IPA itself, only lower-case letters are used. The 1949 edition of the IPA handbook indicated that an asterisk ⟨*⟩ may be prefixed to indicate that a word is a proper name, but this convention has not been included in recent editions.

Letters

The International Phonetic Association organizes the letters of the IPA into three categories: pulmonic consonants, non-pulmonic consonants, and vowels.

Pulmonic consonant letters are arranged singly or in pairs of voiceless (tenuis) and voiced sounds, with these then grouped in columns from front (labial) sounds on the left to back (glottal) sounds on the right. In official publications by the IPA, two columns are omitted to save space, with the letters listed among 'other symbols', and with the remaining consonants arranged in rows from full closure (occlusives: stops and nasals), to brief closure (vibrants: trills and taps), to partial closure (fricatives) and minimal closure (approximants), again with a row left out to save space. In the table below, a slightly different arrangement is made: All pulmonic consonants are included in the pulmonic-consonant table, and the vibrants and laterals are separated out so that the rows reflect the common lenition pathway of stop → fricative → approximant, as well as the fact that several letters pull double duty as both fricative and approximant; affricates may be created by joining stops and fricatives from adjacent cells. Shaded cells are judged to be implausible.


Vowel letters are also grouped in pairs—of unrounded and rounded vowel sounds—with these pairs also arranged from front on the left to back on the right, and from maximal closure at top to minimal closure at bottom. No vowel letters are omitted from the chart, though in the past some of the mid central vowels were listed among the 'other symbols'.

Each character is assigned a number, to prevent confusion between similar letters (such as ɵ and θ, ɤ and ɣ, or ʃ and ʄ) in such situations as the printing of manuscripts. The categories of sounds are assigned different ranges of numbers.

Other languages

The IPA is also not universal among dictionaries in languages other than English. Monolingual dictionaries of languages with generally phonemic orthographies usually do not bother indicating the pronunciation of most words, and tend to use respelling systems for words with unexpected pronunciations. Dictionaries produced in Israel use the IPA rarely and sometimes use the Hebrew script for transcription of foreign words. Monolingual Hebrew dictionaries use pronunciation respelling for words with unusual spelling; for example, the Even-Shoshan Dictionary respells words that use kamatz katan. Bilingual dictionaries that translate from foreign languages into Russian usually employ the IPA, but monolingual Russian dictionaries occasionally use pronunciation respelling for foreign words; for example, Ozhegov's dictionary adds a respelling in brackets for the French loanword пенсне (pince-nez) to indicate that the е does not iotate the н.

The IPA is more common in bilingual dictionaries, but there are exceptions here too. Mass-market bilingual Czech dictionaries, for instance, tend to use the IPA only for sounds not found in the Czech language.

Linguists

Although IPA is popular for transcription by linguists, American linguists often alternate use of the IPA with Americanist phonetic notation or use the IPA together with some nonstandard symbols, for reasons including reducing the error rate on reading handwritten transcriptions or avoiding perceived awkwardness of IPA in some situations. The exact practice may vary somewhat between languages and even individual researchers, so authors are generally encouraged to include a chart or other explanation of their choices.


8.2

Consonants

IPA pulmonic consonants

Place of articulation runs from labial (left) to glottal (right): bilabial and labiodental (labial); dental, alveolar, postalveolar, retroflex and alveolo-palatal (coronal); palatal, velar and uvular (dorsal); pharyngeal and epiglottal (radical); and glottal. Manner of articulation runs down the rows. Where letters appear in pairs, the left is voiceless and the right is voiced.

Nasal:                  m (bilabial), ɱ (labiodental), n (alveolar), ɳ (retroflex), ɲ (palatal), ŋ (velar), ɴ (uvular)

Stop:                   p b (bilabial), t d (alveolar), ʈ ɖ (retroflex), c ɟ (palatal), k ɡ (velar), q ɢ (uvular), ʡ (epiglottal), ʔ (glottal)

Sibilant fricative:     s z (alveolar), ʃ ʒ (postalveolar), ʂ ʐ (retroflex), ɕ ʑ (alveolo-palatal)

Non-sibilant fricative: ɸ β (bilabial), f v (labiodental), θ ð (dental), ç ʝ (palatal), x ɣ (velar), χ ʁ (uvular), ħ ʕ (pharyngeal), ʜ ʢ (epiglottal), h ɦ (glottal)

Approximant:            ʋ (labiodental), ɹ (alveolar), ɻ (retroflex), j (palatal), ɰ (velar)

Flap or tap:            ⱱ (labiodental), ɾ (alveolar), ɽ (retroflex)

Trill:                  ʙ (bilabial), r (alveolar), ʀ (uvular), * (epiglottal)

Lateral fricative:      ɬ ɮ (alveolar), *

Lateral approximant:    l (alveolar), ɭ (retroflex), ʎ (palatal), ʟ (velar)

Lateral flap:           ɺ (alveolar), *


— These tables contain phonetic symbols, which may not display correctly in all fonts.

— Where symbols appear in pairs, left–right represent the voiceless–voiced consonants.

— Shaded areas denote pulmonic articulations judged to be impossible.

— Symbols marked with an asterisk (*) are not defined in the IPA.

Notes

Asterisks (*) indicate unofficial IPA symbols for attested sounds. See the respective articles for ad hoc symbols found in the literature. In rows where some letters appear in pairs (the obstruents), the letter to the right represents a voiced consonant (except breathy-voiced [ɦ]). However, [ʔ] cannot be voiced, and the voicing of [ʡ] is ambiguous.[37] In the other rows (the sonorants), the single letter represents a voiced consonant.

Although there is a single letter for the coronal places of articulation for all consonants but fricatives, when dealing with a particular language, the letters may be treated as specifically dental, alveolar, or post-alveolar, as appropriate for that language, without diacritics.

Shaded areas indicate articulations judged to be impossible. The letters [ʁ, ʕ, ʢ] represent either voiced fricatives or approximants. In many languages, such as English, [h] and [ɦ] are not actually glottal, fricatives, or approximants; rather, they are bare phonation.[38] It is primarily the shape of the tongue rather than its position that distinguishes the fricatives [ʃ ʒ], [ɕ ʑ], and [ʂ ʐ].

Non-pulmonic consonants

Clicks:      ʘ   ǀ   ǃ   ǂ   ǁ

Implosives:  ɓ   ɗ   ʄ   ɠ   ʛ   (the retroflex ᶑ is not an official IPA letter)

Ejectives:   pʼ  tʼ  ʈʼ  cʼ  kʼ  qʼ  fʼ  θʼ  sʼ  ɬʼ  ʃʼ  ʂʼ  ɕʼ  xʼ  χʼ  tsʼ  tɬʼ  tʃʼ  ʈʂʼ  kxʼ  qχʼ

Affricates:  ts  dz  tθ  dð  tʃ  dʒ  tɕ  dʑ  ʈʂ  ɖʐ  tɬ  dɮ  cç  ɟʝ  kx  ɡɣ  qχ  ɢʁ

Co-articulated consonants

Continuants: ʍ  w  ɥ

Occlusives:  k͡p  ɡ͡b  ŋ͡m

The labiodental nasal [ɱ] is not known to exist as a phoneme in any language.


Pulmonic consonants

A pulmonic consonant is a consonant made by obstructing the glottis (the space between the vocal cords) or oral cavity (the mouth) and either simultaneously or subsequently letting out air from the lungs. Pulmonic consonants make up the majority of consonants in the IPA, as well as in human language. All consonants in the English language fall into this category.

The pulmonic consonant table, which includes most consonants, is arranged in rows that designate manner of articulation, meaning how the consonant is produced, and columns that designate place of articulation, meaning where in the vocal tract the consonant is produced. The main chart includes only consonants with a single place of articulation.
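One convenient way to think about this arrangement is as a lookup keyed by manner and place. The Python sketch below models a small, hypothetical subset of the chart (a handful of well-known cells only, not the full table):

# A hypothetical subset of the pulmonic chart, keyed by (manner, place).
# Each value is a (voiceless, voiced) pair; None means no letter listed in that slot here.
PULMONIC_SUBSET = {
    ("stop", "bilabial"):         ("p", "b"),
    ("stop", "alveolar"):         ("t", "d"),
    ("stop", "velar"):            ("k", "ɡ"),
    ("nasal", "bilabial"):        (None, "m"),
    ("nasal", "velar"):           (None, "ŋ"),
    ("fricative", "labiodental"): ("f", "v"),
    ("fricative", "alveolar"):    ("s", "z"),
}

def cell(manner: str, place: str) -> str:
    """Return the letters listed for a given manner/place cell of the subset."""
    voiceless, voiced = PULMONIC_SUBSET[(manner, place)]
    return " ".join(letter for letter in (voiceless, voiced) if letter)

print(cell("stop", "velar"))   # k ɡ
print(cell("nasal", "velar"))  # ŋ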

Co-articulated consonants

Co-articulated consonants are sounds that involve two simultaneous places of articulation (are pronounced using two parts of the vocal tract). In English, the [w] in "went" is a coarticulated consonant, because it is pronounced by rounding the lips and raising the back of the tongue. Other languages, such as French and Swedish, have different coarticulated consonants.

Note

[ɧ] is described as a "simultaneous [ʃ] and [x]".[41] However, this analysis is disputed. (See voiceless palatal-velar fricative for discussion.)

Affricates and doubly articulated consonants

Affricates and doubly articulated stops are represented by two letters joined by a tie bar, either above or below the letters. The six most common affricates are optionally represented by ligatures, though this is no longer official IPA usage, because a great number of ligatures would be required to represent all affricates this way. The palatal plosive letters ⟨c⟩ and ⟨ɟ⟩ are sometimes used as a convenience for [t͡ʃ], [d͡ʒ] or similar affricates, even in official IPA publications, so they must be interpreted with care.


Tie bar   Ligature   Description
t͡s        ʦ          voiceless alveolar affricate
d͡z        ʣ          voiced alveolar affricate
t͡ʃ        ʧ          voiceless postalveolar affricate
d͡ʒ        ʤ          voiced postalveolar affricate
t͡ɕ        ʨ          voiceless alveolo-palatal affricate
d͡ʑ        ʥ          voiced alveolo-palatal affricate
t͡ɬ        –          voiceless alveolar lateral affricate
k͡p        –          voiceless labial-velar plosive
ɡ͡b        –          voiced labial-velar plosive
ŋ͡m        –          labial-velar nasal stop
ɡ͡ɣ        –          voiced velar affricate

Non-pulmonic consonants

Non-pulmonic consonants are sounds whose airflow is not dependent on the lungs. These include clicks (found in the Khoisan languages of Africa), implosives (found in languages such as Swahili or Vietnamese), and ejectives (found in many Amerindian and Caucasian languages).


Clicks                                      Implosives      Ejectives
ʘ   Bilabial                                ɓ   Bilabial    ʼ    For example:
ǀ   Laminal alveolar ("dental")             ɗ   Alveolar    pʼ   Bilabial
ǃ   Apical (post-)alveolar ("retroflex")    ʄ   Palatal     tʼ   Alveolar
ǂ   Laminal postalveolar ("palatal")        ɠ   Velar       kʼ   Velar
ǁ   Lateral coronal ("lateral")             ʛ   Uvular      sʼ   Alveolar fricative


Clicks are doubly articulated and have traditionally been described as having a forward 'release' and a rear 'accompaniment', with the click letters representing the release. Therefore all clicks would require two letters for proper notation: ⟨ ⟩ etc., or ⟨ ⟩. When the dorsal articulation is omitted, a [k] may usually be assumed. However, recent research disputes the concept of 'accompaniment'.[43] In these approaches, the click letter represents both articulations, with the different letters representing different click 'types'; there is no velar-uvular distinction, and the accompanying letter represents the manner, phonation, or airstream contour of the click: ⟨ǂ, ᶢǂ, ᵑǂ⟩ etc.

Letters for the voiceless implosives ⟨ƥ, ƭ, ƈ, ƙ, ʠ⟩ are no longer supported by the IPA, though they remain in Unicode. Instead, the IPA typically uses the voiced equivalent with a voiceless diacritic: ⟨ ⟩, etc.

Although not confirmed as contrastive in any language, and therefore not explicitly recognized by the IPA, a letter for the retroflex implosive, ⟨ᶑ⟩, is supported in the Unicode Phonetic Extensions Supplement, added in version 4.1 of the Unicode Standard, or can be created as a composite ⟨ ⟩.

The ejective diacritic often stands in for a superscript glottal stop in glottalized but pulmonic sonorants, such as [mˀ], [lˀ], [wˀ], [aˀ]. These may also be transcribed as [ ].


Tongue positions of cardinal front vowels with highest point indicated. The position of the highest point is used to determine vowel height and backness.


8.3

Vowels

Close:        i y     ɨ ʉ     ɯ u
Near-close:   ɪ ʏ             ʊ
Close-mid:    e ø     ɘ ɵ     ɤ o
Mid:                  ə
Open-mid:     ɛ œ     ɜ ɞ     ʌ ɔ
Near-open:    æ       ɐ
Open:         a ɶ     ä       ɑ ɒ

Paired vowels are: unrounded • rounded (front vowels to the left, central vowels in the middle, back vowels to the right).


Diphthongs

Diphthongs are typically specified with a non-syllabic diacritic, as in ⟨ ⟩. However, sometimes a tie bar is used, especially if it is difficult to tell if the vowel is characterized by an on-glide or an off-glide: ⟨ ⟩ or ⟨ ⟩.

Diacritics

Diacritics are small markings which are placed around the IPA letter in order to show a certain alteration or more specific description in the letter's pronunciation. Subdiacritics (markings normally placed below a letter) may be placed above a letter having a descender.

The dotless i, ⟨ı⟩, is used when the dot would interfere with the diacritic. Other IPA letters may appear as diacritic variants to represent phonetic detail: tˢ (fricative release), bʱ (breathy voice), ˀ a (glottal onset), ᵊ (epenthetic schwa), oᶷ (diphthongization). Additional diacritics were introduced in the Extensions to the IPA, which were designed principally for speech pathology.
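In Unicode terms, most of these diacritics are combining marks or modifier letters, so they are simply appended to a base letter. The following Python sketch is an illustration added here; the code points named in the comments are standard Unicode, but the choice of examples is ours:

import unicodedata

# Base letter + diacritic is just string concatenation in Unicode.
aspirated_t = "t" + "\u02B0"   # U+02B0 MODIFIER LETTER SMALL H  -> tʰ (aspirated)
voiceless_d = "d" + "\u0325"   # U+0325 COMBINING RING BELOW     -> d̥ (voiceless)
nasalized_a = "a" + "\u0303"   # U+0303 COMBINING TILDE          -> ã (nasalized)

for transcription in (aspirated_t, voiceless_d, nasalized_a):
    # NFC composes base + mark into a single code point where one exists (e.g. ã).
    composed = unicodedata.normalize("NFC", transcription)
    print(composed, [unicodedata.name(ch) for ch in composed])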

The IPA defines a vowel as a sound which occurs at a syllable center.[44] The vowel chart in section 8.3 above depicts the vowels of the IPA, mapped according to the position of the tongue.

The vertical axis of the chart corresponds to vowel height. Vowels pronounced with the tongue lowered are at the bottom, and vowels pronounced with the tongue raised are at the top. For example, [ɑ] (the first vowel in father) is at the bottom because the tongue is lowered in this position, while [i] (the vowel in "meet") is at the top because the sound is said with the tongue raised to the roof of the mouth.

In a similar fashion, the horizontal axis of the chart is determined by vowel backness. Vowels with the tongue moved towards the front of the mouth (such as [ɛ], the vowel in "met") are to the left in the chart, while those in which it is moved to the back (such as [ʌ], the vowel in "but") are placed to the right.

Where vowels are paired, the right-hand symbol represents a rounded vowel (in which the lips are rounded) while the left-hand symbol is its unrounded counterpart.


Syllabicity diacritics
◌̩   Syllabic            ◌̯   Non-syllabic

Consonant-release diacritics
◌ʰ  tʰ   Aspirated[a]        ◌̚  t̚   No audible release
◌ⁿ  dⁿ   Nasal release       ◌ˡ  dˡ   Lateral release

Phonation diacritics
◌̥   Voiceless           ◌̬   Voiced
◌̤   Breathy voiced[b] (also written with ⟨ʱ⟩, as in dʱ)
◌̰   Creaky voiced

Articulation diacritics
◌̪   Dental              ◌̼   Linguolabial
◌̺   Apical              ◌̻   Laminal
◌̟   Advanced            ◌̠   Retracted
◌̈   ë ä  Centralized    ◌̽   Mid-centralized
◌˔   Raised (e.g. [ɹ̝] = voiced alveolar non-sibilant fricative)
◌˕   Lowered (e.g. [β̞] = bilabial approximant)

Co-articulation diacritics
◌̹   More rounded        ◌̜   Less rounded
◌ʷ  tʷ dʷ   Labialized or labio-velarized    ◌ʲ  tʲ dʲ   Palatalized
◌ˠ  tˠ dˠ   Velarized                        ◌ˤ  tˤ aˤ   Pharyngealized
◌ᶣ  tᶣ dᶣ   Labio-palatalized                ◌̴  ɫ   Velarized or pharyngealized
◌̘   Advanced tongue root    ◌̙   Retracted tongue root
◌̃   Nasalized               ◌˞  ɚ ɝ   Rhotacized


Open glottis      [t]   voiceless
                        breathy voice, also called murmured
                        slack voice
Sweet spot        [d]   modal voice
                        stiff voice
                        creaky voice
Closed glottis          glottal closure

Suprasegmental

These symbols describe the features of a language above the level of individual consonants and vowels, such as prosody, tone, length, and stress, which often operate on syllables, words, or phrases: that is, elements such as the intensity, pitch, and gemination of the sounds of a language, as well as the rhythm and intonation of speech.[46] Although most of these symbols indicate distinctions that are phonemic at the word level, symbols also exist for intonation on a level greater than that of the word.[46]


Length, stress, and rhythm

ˈa      Primary stress (symbol goes before the stressed syllable)
ˌa      Secondary stress (symbol goes before the stressed syllable)
aː kː   Long (long vowel or geminate consonant)
aˑ      Half-long
ă       Extra-short
a.a     Syllable break
s‿a     Linking (absence of a break)

Intonation

| Minor (foot) break ‖ Major (intonation) break

↗ [47] Global rise ↘ [47] Global fall

Tone diacritics and tone letters

e̋    Extra high / top        ꜛke   Upstep
é    High                    ě     Generic rise
ē    Mid
è    Low                     ê     Generic fall
ȅ    Extra low / bottom      ꜜke   Downstep


Obsolete and nonstandard symbols

The IPA inherited alternate symbols from various traditions, but eventually settled on one for each sound. The other symbols are now considered obsolete. An example is ⟨ɷ⟩, which has been standardized to ⟨ʊ⟩. Several letters indicating secondary articulation have been dropped altogether, with the idea that such things should be indicated with diacritics: ⟨ƍ⟩ for ⟨zʷ⟩ is one. In addition, the rare voiceless implosive series ⟨ƥ ƭ ƈ ƙ ʠ⟩ has been dropped; these are now written ⟨ ⟩ or ⟨pʼ↓ tʼ↓ cʼ↓ kʼ↓ qʼ↓⟩. A rejected competing proposal for transcribing clicks, ⟨ʇ, ʗ, ʖ⟩, is still sometimes seen, as the official letters ⟨ǀ, ǃ, ǁ⟩ may cause problems with legibility, especially when used with brackets ([ ] or / /), the letter ⟨l⟩, or the prosodic marks ⟨|, ‖⟩ (for this reason, some publications which use standard IPA click letters disallow IPA brackets).[49]

There are also unsupported or ad hoc letters from local traditions that find their way into publications that otherwise use the standard IPA. This is especially common with affricates, such as the "barred lambda" ⟨ƛ⟩ for [t͡ɬ].

IPA extensions

The "Extensions to the IPA", often abbreviated as "extIPA", and sometimes called "Extended IPA", are symbols whose original purpose was to accurately transcribe disordered speech. At the IPA Kiel Convention in 1989, a group of linguists drew up the initial extensions,[50] which were based on the previous work of the PRDS (Phonetic Representation of Disordered Speech) Group in the early 1980s.[51] The extensions were first published in 1990, then modified, and published again in 1994 in the Journal of the International Phonetic Association, when they were officially adopted by the ICPLA.[52] While the original purpose was to transcribe disordered speech, linguists have used the extensions to designate a number of unique sounds within standard communication, such as hushing, gnashing teeth, and smacking lips. The extensions have also been used to record certain peculiarities in an individual's voice, such as nasalized voicing.[2]

The Extensions to the IPA do not include symbols used for voice quality (VoQS), such as whispering.


Vowels

[ ] and [ä] are near-close and open central vowels, respectively. The only known vowels that cannot be represented in this scheme are vowels with unexpected roundedness, which would require a dedicated diacritic, such as ⟨ʏʷ⟩ and ⟨uᵝ⟩ (or ⟨ɪʷ⟩ and ⟨ɯᵝ⟩).

Symbol names

An IPA symbol is often distinguished from the sound it is intended to represent, since there is not necessarily a one-to-one correspondence between letter and sound in broad transcription, making articulatory descriptions such as 'mid front rounded vowel' or 'voiced velar stop' unreliable. While the Handbook of the International Phonetic Association states that no official names exist for its symbols, it admits the presence of one or two common names for each. The symbols also have nonce names in the Unicode standard. In some cases, the Unicode names and the IPA names do not agree. For example, IPA calls ɛ "epsilon", but Unicode calls it "small letter open E".
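The mismatch between common IPA names and official Unicode character names can be checked directly against the Unicode database shipped with Python; the snippet below is a small illustration added here:

import unicodedata

# Official Unicode names for a few IPA letters (compare with their informal IPA names).
for letter in "ɛ ʃ ŋ θ".split():
    print(letter, f"U+{ord(letter):04X}", unicodedata.name(letter))
# ɛ U+025B LATIN SMALL LETTER OPEN E   (IPA: "epsilon")
# ʃ U+0283 LATIN SMALL LETTER ESH      (IPA: "esh")
# ŋ U+014B LATIN SMALL LETTER ENG      (IPA: "eng")
# θ U+03B8 GREEK SMALL LETTER THETA    (IPA: "theta")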

The traditional names of the Latin and Greek letters are usually used for unmodified letters. Letters which are not directly derived from these alphabets, such as [ʕ ], may have a variety of names, sometimes based on the appearance of the symbol, and sometimes based on the sound that it represents. In Unicode, some of the letters of Greek origin have Latin forms for use in IPA; the others use the letters from the Greek section.

For diacritics, there are two methods of naming. For traditional diacritics, the IPA notes the name in a well-known language; for example, é is acute, based on the name of the diacritic in English and French. Non-traditional diacritics are often named after objects they resemble, so ◌̪ is called bridge.

Pullum and Ladusaw list a variety of names in use for IPA symbols, both current and retired, in addition to names of many other non-IPA phonetic symbols. Their collection is extensive enough that the Unicode Consortium used it in the development of Unicode.


ASCII and keyboard transliterations

Several systems have been developed that map the IPA symbols to ASCII characters. Notable systems include Kirshenbaum, Arpabet, SAMPA, and X-SAMPA. The usage of mapping systems in on-line text has to some extent been adopted in the context of input methods, allowing convenient keying of IPA characters that would otherwise be unavailable on standard keyboard layouts.
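The idea behind all of these ASCII schemes is a simple character-for-character (or sequence-for-sequence) mapping. The Python sketch below uses a deliberately tiny, partial X-SAMPA-style table for illustration; real X-SAMPA defines many more correspondences, including multi-character ones:

# A partial, illustrative X-SAMPA-to-IPA table (real X-SAMPA is much larger).
XSAMPA_TO_IPA = {
    "S": "ʃ", "Z": "ʒ", "T": "θ", "D": "ð", "N": "ŋ",
    "@": "ə", "{": "æ", "I": "ɪ", "U": "ʊ", "E": "ɛ",
}

def xsampa_to_ipa(text: str) -> str:
    """Convert an ASCII string symbol by symbol; unmapped characters pass through."""
    return "".join(XSAMPA_TO_IPA.get(ch, ch) for ch in text)

print(xsampa_to_ipa("TINk"))  # θɪŋk  ("think")
print(xsampa_to_ipa("SIp"))   # ʃɪp   ("ship")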


8.4

Competencies:

1. Make a chart of words for each of the following types of articulation:

Aspirated

Unaspirated

Voiced

Bilabial

Interdental

2. Rewrite the following words using IPA letters:

Dog
Cat
Animal
Ambulance
Carnival
Apricot
Umbrella


i.

Teaching Strategies:

Gallery walk

Identification

Fill in the blanks

Charts

Small Group Discussion

Illustrations

Essay writing

Explanations

Drawings

Transcription

Entertainment

Role play

Comments for Improvement:

Making a module is not an easy task. There should be proper sequencing of data, especially of the topics to be discussed in every chapter. The topics should flow from the most general down to the more specific to attain a properly organized and coherent module, and the topics and discussions should not be redundant. The discussions must clearly explain the topic that is highlighted in every chapter. The competencies are the tricky part: it is really difficult to arrange the competencies in every chapter, so it comes in handy to plot the teaching strategies that will be injected into every chapter of your module and follow them when formulating the competencies. Once you have started making it, it becomes easier as you go along in every chapter.

ii.

References:

http://literary-articles.blogspot.com/2012/03/mechanism-of-speech-process-and.html

http://www.studymode.com/essays/Eight-Parts-Of-Human-Speech-1082713.html

http://www.bartleby.com/186/pages/page7.html

http://www.doc.ic.ac.uk/~nd/surprise_95/journal/vol1/sm1/article1.html

http://giftofgab-fluentenglish.blogspot.com/2009/08/speech-organs-how-important.html

http://www.wisegeek.com/what-are-speech-organs.htm#didyouknowout

http://wiki.answers.com/Q/What_are_the_speech_organs_and_their_functions

https://en.wikipedia.org/wiki/Phonetics

http://introductiontolinguistics2009ii.wordpress.com/2010/02/25/phonetics-and-phonology/

http://www.phon.ox.ac.uk/jcoleman/PHONOLOGY1.htm

http://esl.fis.edu/grammar/langdiff/phono.htm

http://pandora.cii.wwu.edu/vajda/ling201/test2materials/Phonology1.htm

http://www.ask.com/question/similarities-between-phonetics-and-phonology

http://www.englishclub.com/grammar/parts-of-speech_1.htm

http://en.wikipedia.org/wiki/Diction

http://pandora.cii.wwu.edu/vajda/ling201/test2materials/articulatory_phonetics.htm

http://en.wikipedia.org/wiki/Place_of_articulation

http://en.wikipedia.org/wiki/Communication

http://www.ling.upenn.edu/courses/Fall_2013/ling115/phonetics.html

International Phonetic Alphabet-wikipedia.com

www.webcrawler.com

http://en.wikipedia.org/wiki/Nonverbal_communication

iii.

Credits:

First and foremost, I would like to thank the Heavenly Father for giving me enough strength to finish this work. Secondly, I would like to thank the people behind the success of this module: my parents, my sister, my boyfriend and my classmates.

Lastly and most especially, I would like to extend my heartfelt gratitude to my teacher, who has been patient and understanding all throughout these days, Ms. Jo-anne Phoebe Lagarde.

Thank you so much for always being there even though everyone else has turned their backs on us. Thank you so much for believing in us after all the mistakes that were made. Thank you so much for being there.

More power and God Bless!

Larissa Cayobit