“Excellent results” of teaching listening at a Saudi University: Appearance and reality



Syed Md Golam Faruk

King Khalid University, Saudi Arabia

Mohammad Reazur Rahman

King Khalid University, Saudi Arabia

Abstract

The paper investigates the difference between the appearance and the reality of the “excellent results” that 49 university students obtained in their listening course, which was designed to address the lowest learning domain of the Revised Bloom’s Taxonomy (RBT)—“remember” (Anderson et al., 2001). The subjects of this study were students from the Department of Engineering who were studying English in their foundation course at King Khalid University, Saudi Arabia. The mean scores of the regular test, based on “remember” and given at the end of the semester, were compared with those of a similar test based on “remember” and “understand” of RBT (Anderson et al., 2001), given three weeks after the regular test. The statistical analysis of the results shows that the students’ memorization of some words and phrases helps them identify the same words and phrases in a new context but does not help them answer “understand” questions. The students who did excellently on “remember” questions did miserably on “understand” questions. The paper finds that the only reason the students can perform well on their final exams is that the listening texts they listen to are carbon copies of what they listened to and read (transcripts) throughout the semester—the students are simply being asked to apply well-rehearsed schemata to specific kinds of task. The paper also finds that this achievement is practically meaningless because it does not develop the students’ understanding or higher-level cognitive skills like “apply”, “analyze”, “evaluate”, and “create” (Anderson et al., 2001).

Keywords: Saudi Arabia, listening, listening test, Bloom's taxonomy


Introduction

Saudi education has two lineages—traditional and formal. The curriculum of the traditional Qur’anic school was intended to develop the learning domains of “remember” and “understand”. In this type of school, the key system of learning was memorization, for two reasons: firstly, memorization of the Holy Book is emphasized in Hadith (Al Bukhari, n.d.: 93: 489), and secondly, in the past the Qur’an could be transmitted from one generation to another only orally. Formal education, on the other hand, has been organized into two types of schooling—the kuttab and the madrassa (Tibi, 1998). For many years the kuttab was the only type of formal education in Saudi Arabia, and it taught mainly religion, the Arabic language, and basic arithmetic. In the 20th century the kuttab was replaced by the modern elementary school (madrassa), which nonetheless continues the legacy of the old syllabus and method of instruction (the teacher still acts like a preacher). Szyliowicz (1973) observes:

The following method of instruction prevailed in medieval Islam through [sic] adaptations were [sic] made to meet the needs of different levels of instruction. Formal delivery of lecture with the lecturer squatting on a platform against a pillar and one or two circles of students seated before him was the prevailing method in higher levels of instruction. The teacher read from a prepared manuscript or from a text, explaining the material, and allowed questions and discussion to follow the lecture.

Baker (1997, p. 246) observes the same method of instruction in Saudi Arabia, where the students are the poor third element in the classroom after the teacher and the textbook. Another study on Saudi secondary schools (Goodlad, 1984) has similar findings: Goodlad finds that textbooks are often a substitute for pedagogy, that teaching methods tend to be mechanical and unengaging, and that memorization and rote learning are consistently preferred over critical thinking and creativity.

In a similar vein, in the 21st century, Elyas and Picard (2010, p. 138) observe that the preacher-like, teacher-centered Saudi classroom resembles the Halgah—a religious gathering at a mosque where the imam preaches and the passive audience listens to him attentively and exclusively. Sometimes, however, these powerful, preacher-like teachers give the students some scope for interaction, though within strict parameters: the students are not allowed to question every topic and assumption (Jamjoon, 2009, pp. 7-8). Moreover, many other scholars (Al Ghamdi & Deraney, 2013; Al-Essa, 2009; Al-Miziny, 2010; Elyas, 2008; Al-Souk, 2009) agree that present Saudi education still revolves around teachers and textbooks, and that the students are not urged to take part in classroom activities, ask questions, or think critically and creatively.

Straitjacketed by this long-standing tradition, Saudi higher education is still confined to memorization, in spite of its obligation to follow the National Qualifications Framework for Higher Education in the Kingdom of Saudi Arabia (NQF) (National Qualifications Framework, 2009), which encourages critical thinking skills. At every level of education, Saudi students memorize ready-made answers to predictable questions and regurgitate them in exams. Surprisingly, although the de jure Saudi education policies strongly advocate the teaching of higher-level cognitive skills, the de facto classroom teaching and question patterns are tailored to cater to the students’ habit of memorization.

King Khalid University (one of the biggest universities in Saudi Arabia, situated in the southern province of Assir) is no exception. The students are still stuck in the lowest learning domain—“remember”—and instead of weaning them off the habit of memorization, the teachers are providing them with probable questions and prepared answers.

In this context, the paper investigates a course offered to first-year Engineering students by the English Department of King Khalid University, KSA, in order to test the validity of the widely held view that Saudi students use only “memorization” as their learning strategy, and to find out the impact of this learning strategy on other, higher-level learning strategies. It addresses three research questions:

1. What is the students’ preferred learning strategy in the Listening course offered by the Department of English at King Khalid University?
2. Do the teachers encourage the students to develop other learning strategies?
3. Does the Listening course develop the students’ other learning domains?

In this paper, learning domains and language learning strategies are discussed in terms of the NQF and Bloom’s Taxonomy (BT). In the NQF (National Qualifications Framework, 2009), there are five domains of learning outcomes: knowledge; cognitive skills; interpersonal skills and responsibility; communication, information technology and numerical skills; and psychomotor skills. According to the studies discussed above, presumably no attempt is made to develop the students’ “cognitive skills”, let alone higher domains like “interpersonal skills and responsibility” or “communication”. Therefore, in the following sections, the discussion is limited to “knowledge” and “cognitive skills” (for detail see Table 1), which correspond to categories of the Revised Bloom’s Taxonomy (RBT) (see Table 2).


Table 1: The lowest two NQF domains and learning outcomes

NQF Domain        Learning Outcomes
Knowledge         Ability to recall, understand, and present information, including: knowledge of specific facts; knowledge of concepts, principles, and theories; and knowledge of procedures.
Cognitive skills  Ability to apply conceptual understanding of concepts, principles, and theories.
                  Ability to apply procedures involved in critical thinking and creative problem solving, both when asked to do so and when faced with unanticipated new situations.
                  Ability to investigate issues and problems in a field of study using a range of sources and draw valid conclusions.

NQF’s initial two learning domains—knowledge and cognitive skills—are in fact nothing but a rephrasing of the RBT. The NQF’s “knowledge” incorporates “remember” and “understand” of the RBT, which were named “knowledge” and “comprehension” in the original BT (Bloom et al., 1956). The second domain in the NQF’s hierarchy, “cognitive skills”, corresponds to “apply”, “analyze”, “evaluate”, and “create” of the RBT, termed “application”, “analysis”, “synthesis”, and “evaluation” in the BT (see Table 2).

Table 2: NQF and RBT

NQF domain   RBT categories                                                      RBT subcategories
Knowledge    Remember—Retrieving relevant knowledge from long-term memory.       Recognizing, Recalling
             Understand—Determining the meaning of instructional messages,       Interpreting, Exemplifying, Classifying,
             including oral, written, and graphic communication.                 Summarizing, Inferring, Comparing, Explaining


Methods

As one of the researchers has been teaching at KKU for more than seven years and the other for more than four, they know that the participants, materials, teaching methods, and test patterns of this study are typical of the listening courses offered at the university.

Participants

The participants of this research are 49 male Engineering students, aged 18-21, of King Khalid University, Saudi Arabia. They were studying listening in the Intensive English Program, in two sections taught by the same teacher, during their first semester (September-December 2014).

Material

As this study is integrated into the students’ regular curriculum, the material used for the regular test is taken from the prescribed listening textbook, Open Forum 1 (Blackwell and Naber, 2006), and the material for the post-regular test from its website, www.oup.com/elt/openforum.

Procedure

King Khalid University has well-equipped language labs where the listening classes of the Engineering students are conducted for three contact hours a week, 42 contact hours over the whole semester. During the semester the students are taught 10 units from Open Forum 1, each of which has two audio clips. In each class, the teacher starts with pre-listening activities and then plays one audio clip three times. Throughout the class, the teacher helps the students with the meanings and definitions of words and terms whenever they ask for them, and the students can also take help from the transcripts of the clips. In other words, the students have every opportunity to “understand” the texts, but they do not avail themselves of it because that type of question is not set in the exam. At the end of the class, the students do the exercises given in the textbook. They can listen to the clips and read the transcripts again at home, as all of them have the CDs and transcripts.

Tests

In this study, the marks obtained in the regular final test are compared to those of a post-regular test given three weeks after the regular test. The question paper of the regular test consists of six parts. In the first part (two marks), the students listen to a long conversation for two and a half minutes and answer 10 multiple-choice questions with four options each. In the second part (two marks), they listen to 10 short conversations of 30-40 seconds each, and for each conversation they answer one multiple-choice question. In the third part (two and a half marks), they listen to a conversation for one and a half minutes and identify 8 statements as true or false. In the fourth part (two and a half marks), the students take a cloze test based on the transcript of a text which they listen to for one minute; the difference between a regular cloze test and this one is that the students are given two options from which to choose the correct answer for each blank. In the fifth part (one and a half marks), the students take another cloze test based on the transcript of a text which they listen to for one minute. In the last part (two marks), the students listen to four audio clips, each played for 8-12 seconds, and then have to identify the correct meaning of each of four words (one word from each clip) from two options. The total for the listening exam is twelve and a half marks. In this study, only the marks obtained in parts one and two are used as data. The marks obtained in the other parts are not considered valid for statistical analysis because most of the students answer those questions by randomly ticking one of the two given options.

It is to be noted here that, in terms of the RBT, the regular test has “remember” questions worth two marks and “understand” questions worth 0.25 marks, whereas in the post-regular test “remember” and “understand” questions carry two marks each. As there are very few “understand” questions in the regular test, those marks are not included in the data analysis.

The researchers followed the regular test pattern while designing the post-regular test. Moreover, the vocabulary and content are also similar to those of the regular exam texts, if not exactly the same. In addition, the researchers measured the difficulty levels of both the regular and post-regular exam texts in terms of variables such as length, speed, familiarity, information density, and text organization. Using the Readability Statistics feature of Microsoft Word, they found that the texts used in the regular and post-regular tests are of the same level (see Table 3).
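For reference, the two readability indices reported in Table 3 are the standard Flesch measures; the well-known published formulas are stated here for convenience (Word supplies the word, sentence, and syllable counts automatically):

\[
\text{Flesch Reading Ease} = 206.835 - 1.015\left(\frac{\text{words}}{\text{sentences}}\right) - 84.6\left(\frac{\text{syllables}}{\text{words}}\right)
\]

\[
\text{Flesch-Kincaid Grade Level} = 0.39\left(\frac{\text{words}}{\text{sentences}}\right) + 11.8\left(\frac{\text{syllables}}{\text{words}}\right) - 15.59
\]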

Table 3: Comparison between the difficulty levels of the texts

                              Regular Test                               Post-regular Test
Variables per text            Long conversation   Short conversation     Long conversation   Short conversation
Words                         407                 80.6                   375                 77.4
Characters                    1830                379                    1684                370.5
Paragraphs                    11                  2.9                    7                   2.3
Sentences                     33                  7                      34                  7.4
Sentences per paragraph       3.3                 3.57                   4.8                 4.87
Words per sentence            12.3                12.24                  11                  10.4
Characters per word           4.3                 4.48                   4.3                 4.62
Passive sentences             6%                  1.6%                   5%                  2%
Flesch Reading Ease           74.6                69.6                   71.8                65.52
Flesch-Kincaid Grade Level    5.8                 6.55                   5.8                 6.66
Words per minute              162.4               163.03                 153.06              151.76

The students took the post-regular test as seriously as they take regular ones, since they were informed well in advance that the marks of the post-regular test would be added to their final marks.

Data Analysis

Interrater reliability for marking the papers

As the questions were objective, the Pearson correlation coefficient was not used to analyze the marks given by the two independent raters; the two raters marked the papers simply in order to avoid mistakes.

Statistical analysis

The participants’ scores were analyzed using an unpaired t-test and post-hoc LSD (least significant difference) comparisons. Cohen’s d was also used to calculate the effect size.
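As an illustration, the t and d values reported later in Table 5 can be reproduced (up to rounding) from the summary statistics in Table 4 alone. The sketch below is not the authors’ code; it is a minimal reconstruction assuming a pooled-variance unpaired t-test and a pooled-SD Cohen’s d, using SciPy only for the p value:

```python
from math import sqrt
from scipy import stats

def pooled_t_and_cohens_d(m1, s1, n1, m2, s2, n2):
    """Unpaired t statistic and Cohen's d computed from summary statistics."""
    # Pooled standard deviation across the two groups
    sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    t = (m1 - m2) / (sp * sqrt(1 / n1 + 1 / n2))
    d = (m1 - m2) / sp                    # Cohen's d with pooled SD
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t), df)        # two-tailed p value
    return t, df, p, d

# Summary statistics from Table 4 (totals):
# regular "remember":        M = 1.63, SD = .37, n = 49
# post-regular "remember":   M = 1.23, SD = .38, n = 34
# post-regular "understand": M = 0.63, SD = .41, n = 34
print(pooled_t_and_cohens_d(1.63, 0.37, 49, 1.23, 0.38, 34))  # t ≈ 4.79, df = 81, d ≈ 1.07
print(pooled_t_and_cohens_d(1.63, 0.37, 49, 0.63, 0.41, 34))  # t ≈ 11.58, df = 81, d ≈ 2.59
```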

Results

The descriptive statistics are presented in Table 4 and the comparisons of means in Table 5.

Table 4: Means and standard deviations of regular and post-regular tests

Test Type          Question Type       RBT Category   N    M     SD
Regular Test       Long Conversation   Remember       49   1.63  .45
                                       Understand     —    —     —
                   Short Conversation  Remember       49   1.63  .36
                                       Understand     —    —     —
                   Total               Remember       49   1.63  .37
                                       Understand     49   —     —
Post-regular Test  Long Conversation   Remember       34   1.06  .33
                                       Understand     34   .69   .25
                   Short Conversation  Remember       34   1.41  .37
                                       Understand     34   .58   .24
                   Total               Remember       34   1.23  .38
                                       Understand     34   .63   .41

Table 5: Comparison of the mean scores of “remember” and “understand” of regular and post-regular tests

Conditions                                               Mean Difference   p         t       df   Cohen’s d   Effect size
Regular (remember) vs. post-regular (remember) tests     .40               .0001**   4.79    81   1.06        .78
Regular (remember) vs. post-regular (understand) tests   1.00              .0001**   11.58   81   2.56        .78

*p < .05. **p < .01.

As can be seen in Table 4, in the regular test the students did very well on the “remember” questions, obtaining 1.63 out of 2 (81.5%) on average. In the post-regular test, on the other hand, the average mark for the “remember” questions is 1.23 out of 2 (61.5%), which is barely a pass mark according to university regulations. The marks are far lower on the “understand” questions: the average mark is .63 out of 2 (31.5%), which is about half of the mark scored on the “remember” questions in the same test and about one-third of the mark obtained on the “remember” questions in the regular test.

One other difference between the regular and post-regular tests is worth noting. In the regular test there is no difference in marks between the parts based on short and long conversations. Logically, as a short conversation is played for a very short period of time, the students should be able to make the best use of their short-term memory and do better on it than on the part based on the long conversation. In the regular test, however, the mean results for the long and short conversations are the same (M = 1.63), whereas in the post-regular test, where the vocabulary is the same but the context is different, the students did better on the short conversations than on the long conversation: the mean result for the “remember” questions on the long conversation (M = 1.06) is 24.82% lower than that for the “remember” questions on the short conversations (M = 1.41).
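As a worked check of the reported percentage (the figure takes the short-conversation mean as its base):

\[
\frac{1.41 - 1.06}{1.41} = \frac{0.35}{1.41} \approx 0.2482 = 24.82\%
\]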

Table 5 shows the differences between the regular and post-regular tests in terms of Cohen’s d (effect size). When the students wrote their memorized answers to known questions in the regular test, the result was excellent (M = 1.63), and its difference from the mean result of the “remember” questions in the post-regular test (M = 1.23) is statistically significant, with a large effect size (d = 1.06). However, when the mean result of the “remember” questions in the regular test is compared to that of the “understand” questions in the post-regular test, the effect size of the mean difference is more than double the former (d = 2.56).

Discussion

The present study aimed to investigate the difference between the students’ results and their real achievement at the end of the listening course. The data analysis shows that the excellent results obtained on the memorization-based test have limited relevance to the development of their listening skills. The students did very well when the (regular) test was based on texts they had listened to again and again in the classroom with the help of transcripts. But when the same vocabulary, in sentences of the same linguistic difficulty, was presented in a different context (in the post-regular test), their performance was poor in identifying words and poorer still in “understanding”.

Therefore, in answer to research question 1, “What is the students’ preferred learning strategy in the Listening course offered by the Department of English at King Khalid University?”, we have to say that most of the students are doing nothing but memorizing ready-made answers to predictable questions. The answer to research question 2, “Do the teachers encourage the students to develop other learning strategies?”, is that the teachers try, but in vain, as long as the questions are “remember”-based. The answer to the last question, “Does the Listening course develop the students’ other learning domains?”, is obviously “no”. The results of the study show that “remember”-based questions cannot even develop “understand”, let alone other learning domains like “apply”, “analyze”, “evaluate”, and “create”.

Conclusion

Two conclusions, with some caveats described below, can be drawn from this study. Firstly, Saudi students do not want to fly their “nest” of “remember”, and there is no attempt to wean them off their dependence on memorization. Because of this catering to their natural tendency to adopt the easiest learning strategy, one they have been familiar with since the beginning of their informal education at home, the students are missing their best and probably last chance to stop behaving like lotus-eaters. Secondly, as the data of this study demonstrate, memorization does not help the students move even a single step upward to “understand”, let alone to other higher learning domains like “apply”, “analyze”, “evaluate”, and “create”. This is probably because the cognitive skills described in Tables 1 and 2 are not innate and cannot be acquired independently by the students (Landsman & Gorski, 2007; Lundquist, 1999; Rippen et al., 2002). Therefore, most of the students remain unaware of critical thinking skills even after earning the highest degrees.

The study is not without limitations. Firstly, it was conducted with only 49 participants from one university. A larger sample would accommodate individual variation better in the statistical analysis, and the findings could be generalized to the whole country only if data were collected from other universities representing all the regions of Saudi Arabia. Secondly, for lack of qualitative data on teachers’ and students’ attitudes and experiences, their opinions remained unexplored.

Despite these limitations, the results of this study have important pedagogical implications. It is high time the Saudi higher education authorities became aware that the current methods of teaching in the universities are not in fact producing the critical thinkers who can lead the country to the “core” zone of the world economy (Wallerstein, 2006), which the state intends to reach by the year 2024 (The Ministry of Economy and Planning, 2006). The present teaching methods and test patterns pamper the students with the false satisfaction of doing very well in exams, which is practically useless in real life. Therefore, instead of pampering them, the university authorities should push the students to fly their seemingly fruitful and, for the time being, safe nest of “remember”, so that they have to try to learn the other domains like “apply”, “analyze”, “evaluate”, and “create”.


References

-Al Bukhari. (n.d.). 93: 489. Available at http://www.sahih-bukhari.com/Pages/Bukhari_9_93.php

-Al-Essa, A. (2009). Education reform in Saudi Arabia between absence of political vision, apprehension of the religious culture and disability of educational management (1st ed.). Beirut: Dar Al Saqi.

-Al Ghamdi, A. K. H., & Deraney, P. M. (2013). Effects of teaching critical thinking to Saudi female university students using a stand-alone course. International Education Studies, 6(7). DOI: 10.5539/ies.v6n7p176

-Al-Miziny, H. (2010). Abducting of education in Saudi Arabia (1st ed.). Beirut: Arab Diffusion Company.

-Al-Souk, M. (2009). How can we prepare the teacher? Retrieved from http://bahaedu.gov.sa/vb/archive/index.php/t-648.html

-Anderson, L. W., Krathwohl, D. R., & Bloom, B. S. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. New York: Longman.

-Baker, C. (1997). Constructing and reconstructing classroom literacies. In S. Muspratt, A. Luke, & P. Freebody (Eds.), Constructing critical literacies. Cresskill, NJ: Hampton Press.

-Blackwell, A., & Naber, T. (2006). Open Forum 1. London: OUP.

-Bloom, B. S., Krathwohl, D. R., & Masia, B. B. (1956). Taxonomy of educational objectives: The classification of educational goals. New York: D. McKay.

-Elyas, T. (2008). The attitude and the impact of the American English as a global language within the Saudi education system. Novitas-ROYAL, 2(1).

-Elyas, T., & Picard, M. (2010). Saudi Arabian educational history: Impacts on English language teaching. Education, Business and Society: Contemporary Middle Eastern Issues, 3(2), 136-145.

-Goodlad, J. (1984). A place called school: Prospects for the future. New York: McGraw-Hill.

-Jamjoon, M. I. (2009). Female Islamic Studies teachers in Saudi Arabia: A phenomenological study. Teaching and Teacher Education, XX, 1-12.

-Landsman, J., & Gorski, P. (2007). Countering standardization. Educational Leadership, 64(8), 40-41.

-Lundquist, R. (1999). Critical thinking and the art of making good mistakes. Teaching in Higher Education, 4(4), 523-530.

-National qualifications framework for higher education in the Kingdom of Saudi Arabia. (2009). Available: www.ncaaa.org.sa/siteimages/ProductFiles/29_Product.pdf

-Rippen, A., Booth, C., Bowie, S., & Jordan, J. (2002). A complex case: Using the case study method to explore uncertainty and ambiguity in undergraduate business education. Teaching in Higher Education, 7(4), 429.

-Szyliowicz, J. S. (1973). Education and modernization in the Middle East. London: Cornell University Press.

-The Ministry of Planning and National Economy. (n.d.). “Knowledge-based economy” in Ninth development plan, Kingdom of Saudi Arabia (pp. 87-105). Available: http://www.mep.gov.sa/inetforms/article/Download.jsp

-Tibi, B. (1998). The challenge of fundamentalism: Political Islam and the New World disorder. Berkeley, CA: University of California Press.

-Wallerstein, I. (2006). World-systems analysis: An introduction. Durham: Duke University Press.