Research in Higher Education, Vol. 44, No. 6, December 2003 (© 2003)

POSITING ORGANIZATIONAL EFFECTIVENESS AS A SECOND-ORDER CONSTRUCT IN HONG KONG HIGHER EDUCATION INSTITUTIONS

Paula Kwan*,† and Allan Walker**

The study examines the relative importance of the various organizational effectiveness dimensions in higher education institutions by positing organizational effectiveness as a second-order construct. Based on the findings of a survey administered to university academics in Hong Kong universities, the second-order structure of organizational effectiveness was supported. The findings reflected that the student-related dimensions were not considered as important as the dimensions related to faculty employment and satisfaction and suggested the disproportionate influence of the governing body on universities in Hong Kong.

KEY WORDS: organizational effectiveness; second-order construct; higher education research.

INTRODUCTION

Given that governments in many parts of the world have become increasingly determined to make higher education more accountable to the taxpayer, the quest for effectiveness is a pressing concern for many universities (Johnes and Taylor, 1990). However, the question of how to define organizational effectiveness in such institutions is complicated by the nature of the organizations themselves; they are very different from profit-driven organizations. As noted by Cameron and Whetten (1983), the loosely coupled nature of higher educational institutions and the lack of precise effectiveness indicators for success make the evaluation of effectiveness difficult. Most of the literature in this area provides little more than theoretical deliberation on the topic. One major exception to this is the model developed by Cameron (1978), in which organizational effectiveness was reflected in nine dimensions—four relate to students and five to

*Division of Commerce, City University of Hong Kong, HKSAR, China.
**Department of Educational Administration and Policy, The Chinese University of Hong Kong.
†Address correspondence to: Paula Kwan, Division of Commerce, City University of Hong Kong, Tat Chee Avenue, Kowloon Tong, HKSAR, China. E-mail: [email protected]


0361-0365/03/1200-0705/0 © 2003 Human Sciences Press, Inc.


staff and institution issues. Since its development, Cameron's instrument has been widely replicated in studies in the U.S. higher education sector and in a number of contexts outside the United States, such as the United Kingdom and Australia (Lysons and Hatherly, 1992, 1996, 1998; Smart, Hatherly, and Mitchell, 1998). It has been linked with organizational culture (Cameron and Ettington, 1988; Smart and Hamm, 1993b; Smart and St. John, 1996), mission orientations (Fjortoft and Smart, 1994; Smart and Hamm, 1993a), external environments (Cameron and Tschirhart, 1992), organizational decline (Smart, 1989), and decision-making approaches (Smart, Kuh, and Tierney, 1997). The findings of these studies provide almost unanimous support for the model, with minor contextually specific modifications. However, no attempt to date has been made to test its applicability in an Asian context. Therefore, one of the objectives of this study was to address this gap in the literature by validating the model in the Hong Kong context.

In previous studies, organizational effectiveness is defined as a multidimensional phenomenon comprising nine conceptually different but not statistically independent domains (Cameron, 1980, 1981, 1983). The interconnectedness of the effectiveness dimensions has been noted by a number of researchers, but no further attempt has been made to explore the relationships. In essence, the multidimensional nature of the organizational effectiveness phenomenon suggests that it is a second-order construct (Byrne, 1998). A benefit of proposing organizational effectiveness as a second-order construct is that the relative significance of each of the dimensions for overall effectiveness can be ascertained. Accordingly, this study attempts to empirically validate the second-order construct for organizational effectiveness and to examine the relative value attached to each of the dimensions by faculty in Hong Kong higher education institutions. More specifically, the study attempted to address three research questions:

1. Can Cameron's model of organizational effectiveness be generalized to higher education institutions in Hong Kong, or should the dimensions be modified in line with findings from research in the United Kingdom and Australia?

2. Can the relationships among the dimensions of organizational effectiveness be represented by a second-order factor structure?

3. If the answer to question (2) is yes, should the factor structure be a one-factor or a two-factor second-order structure?

HIGHER EDUCATION SECTOR IN HONG KONG

As Hong Kong is a former British colony, the governance of its higher education sector resembles that in the United Kingdom in that all institutions are


supervised by the University Grants Committee (UGC). Apart from the Open University of Hong Kong, which is self-financed, there are a total of eight government-funded higher education institutions in Hong Kong. They all offer a wide range of undergraduate programs, except for the Hong Kong Institute of Education, which focuses only on the training of teachers. As in the U.K. system, Hong Kong universities are held accountable by the UGC through three mechanisms—the Research Assessment Exercises (RAE), the Teaching and Learning Quality Process Review (TLQPR), and the Management Review (MR).

The RAE requires all academic staff to submit three items representing their best research output produced during the preceding 4 years to the Research Grants Council (RGC) of the UGC for assessment. Judgments are based on whether or not the submitted items meet international standards. The RGC also sets aside a fixed amount of research funding each year and distributes this among the institutions based on individual applications, which are assessed on a competitive basis. Every year the RGC publishes the total sum of research grants obtained by members of each institution.

The allocation of funding to universities is primarily based on student numbers, with adjustments for various disciplines, and, in part, on the performance of the institutions. Such performance is mainly measured by the RAE. Given the statistically based nature of the RAE results, in comparison to the formative nature of the TLQPR and the MR, interinstitutional comparison on the basis of the former appears to be more readily achievable and is seen as more objective. The TLQPR and the MR, despite some claims to the contrary, do not appear to have such a resource allocation function and serve mainly to promote self-assessment or self-enhancement (University Grants Committee of Hong Kong; UGC, 2000). As such, of the three mechanisms used by the UGC to monitor and motivate universities, the RAE is the only one through which the institutions are judged and then formally rewarded or sanctioned. Given that universities in Hong Kong, unlike those in the United States, are not typical recipients of large private donations or industrial support, they must rely heavily on the UGC for funding. Consequently, the UGC has a massive influence on all facets of higher education in Hong Kong.

CAMERON'S INSTRUMENT FOR ASSESSING ORGANIZATIONAL EFFECTIVENESS

Based on empirical findings, Cameron (1978) developed a model to assess the organizational effectiveness of higher educational institutions in the United States. His model proposes that there are nine a priori dimensions of organizational effectiveness: (a) student education satisfaction, (b) student academic development, (c) student career development, (d) student personal development,


(e) faculty and administrator employment satisfaction, (f) professional development and quality of the faculty, (g) systems openness and community interaction, (h) ability to acquire resources, and (i) organizational health.

With a view to supporting the validity of his model, Cameron (1986) introduced five predictor variables into his analysis: institutional demographics, institutional strategy, institutional structure, institutional finances, and external environment. Cameron contended that these predictors had been found to be strongly associated with performance in types of organizations other than educational institutions, and were additionally linked to long-term performance. Twenty-nine of the 41 institutions included in his earlier project agreed to participate in his later research. Two sets of regression analyses using the predictors as independent variables were run, one with the nine dimensions as dependent variables and the other with the changes in these nine dimensions over a period of 4 years. The results indicated that the nine dimensions could be indicative of effectiveness beyond a short-term horizon.

REPLICATIONS OF CAMERON’S INSTRUMENT

The applicability of Cameron's instrument in the United Kingdom and Australia has also been supported by Lysons and his colleagues. They conducted a series of analyses on the same range of universities and colleges as those included in Cameron's original study; senior university members were also surveyed. Using a sample of 30 colleges, Lysons and Hatherly (1992, 1996, 1998) reported a close overall resemblance to Cameron's findings in the United States. However, there were two deviations. The first was that the scales of "student academic development" and "ability to acquire resources" were combined into one factor. The second was that the scale of "system openness and community interaction" was split into two factors. With the exception of a low reliability on the student educational satisfaction scale, the internal reliabilities of the other scales were within a satisfactory range. Lysons and his colleagues (Lysons, 1990a,b, 1993; Lysons and Hatherly, 1992, 1996, 1998) provided a number of plausible explanations for these findings. They claimed that the association of student academic development with ability to acquire resources could be attributed to the enduring traditions linking reputation and resources in the United Kingdom (Lysons and Hatherly, 1992): prestigious institutions could more easily acquire resources and would also be perceived to produce higher quality graduates. Splitting system openness and community interaction helped to differentiate two aspects of the model that had been consolidated into one scale: one related to community interaction activities and the other to educational programs that help meet community needs (Lysons and Hatherly, 1996). The low reliability of the student educational satisfaction scale, according to Lysons and Hatherly (1996), was the result of a


narrow reliance on senior university members as respondents to the questionnaire.

Lysons, Hatherly, and Mitchell (1998) also attempted to establish the construct validity of the instrument. A discriminant analysis was run using the factor scores of the organizational effectiveness dimensions as independent variables, and it resulted in four groups of institutions. When the nine dimensions were substituted by the Times Good University Guide scores on higher education institutions in the discriminant analysis, the same classification of institutions emerged. Accordingly, Lysons, Hatherly, and Mitchell (1998) were quite satisfied with the discriminant validity of the instrument.

Attempts were also made to test Cameron's instrument in Australia. Based on data gathered from 495 individuals from 65 university campuses, Lysons and Ryder (1988) conducted an empirical test of Cameron's dimensions. They reported that the findings in the Australian context were less encouraging than those in the United Kingdom. Three of the four scales relating to students were dropped, and only student personal development was retained in the factor structure. This, according to Lysons and Ryder, might suggest that such attributes as "student oriented outcomes were difficult to measure with the dominant coalition members [senior university members] at the overall [organizational] level of analysis" (p. 330). The scale of ability to acquire resources was split into two, and one of the resulting components loaded on a common factor with the scale of professional development and quality of the faculty. To Lysons and Ryder, the root of this phenomenon rested with the dual funding system in Australia, in which resources were essentially allocated centrally according to two criteria: (a) student enrolments and (b) research performance; this accounted for the two-factor structure related to resource acquisition. It was therefore logical that the research performance component was grouped with the professional development and quality of the faculty scale.

The series of studies conducted by Lysons and his colleagues has, in general, supported Cameron's model of effectiveness. Although the results appeared to be less encouraging in Australia than in the United Kingdom, the discrepancies can be logically explained by contextual characteristics. However, no serious attempt has been made to test the model's applicability in an Asian context. One of the aims of this study was to address this gap in the literature.

ISSUES OF CONCERN WITH CAMERON'S MODEL

The above discussion suggests that Cameron's model is capable of reflecting the phenomenon of organizational effectiveness in higher education institutions in a specific context. However, there are a number of issues worthy of further examination. One of these issues was reported by Lysons and his colleagues (Lysons and Hatherly, 1992, 1996; Lysons and Ryder, 1988), who found that the


dimensions relating to student satisfaction failed to emerge as discrete factors in their analysis. They attributed this to the exclusive reliance on senior university members as respondents to the questionnaire. Lysons and Ryder explained that the senior members might not have a solid knowledge of student educational satisfaction. They further warned: "whether the perceptions of this strategic constituency are supported elsewhere in the organization is unknown" (p. 326). Implicitly, their argument suggests that faculty members may not share the views of senior management. Unfortunately, they did not attempt to explore this issue in greater depth. Recognizing the possible shortcomings of producing a full account of organizational effectiveness based on senior members only, this study attempted to include all faculty members as informants in the survey.

The organizational health scale also appeared to present some problems. On the one hand, it emerged as the most consistent outcome from the series of analyses; on the other, the complexity of the scale rendered its interpretation difficult. In the series of studies by Lysons and colleagues, the authors treated "organizational health" as an "indicator" rather than a "predictor," even though they commented that "organizational health aspects may be among the most important issues which predict organizational effectiveness in tertiary institutions" (Lysons and Ryder, 1988, p. 331).

Goodman, Atkin, and Schoorman (1983) note that the indicator of effectiveness should be differentiated from the determinant in organizational effectiveness studies. To them, the former is the appropriation of the effectiveness construct, whereas the latter is a variable that might lead to effectiveness. In the organizational context, the indicator is the outcome of the management process, and the predictor is the process that leads to effectiveness (Goodman and Pennings, 1980). Accordingly, outcomes can be considered products of the organizations, and processes represent those activities necessary to create the final outcomes. A careful examination of the dimensions of organizational health reveals that they are processes that lead to effectiveness only. Items such as "communication style," "flexibility of the administration," "use of talents and expertise," "types of supervision and control," and "long-term planning and goal setting" are determinants of effectiveness and cannot be considered indicators. The ultimate effectiveness of an organization is contingent on many processes, and these measures constitute only part of the whole picture. Interestingly, included in Cameron's dimension of organizational health is an item bearing the same name, "organizational health." It appears that this term was a vague construct to Cameron, such that anything that could not be subsumed in the other eight dimensions was grouped under this dimension. In view of the fact that the organizational health dimension differs from all the other dimensions in that it is a predictor rather than an indicator, it was excluded from the study.


The interconnectedness among the dimensions also appears to be an area worth further investigation. In previous studies, Cameron (1978) contended that "the dimensions were of high internal consistency and that they were distinguishable from one another" (p. 625), yet he acknowledged that they were conceptually distinct but not necessarily statistically independent. Although the close association among these dimensions was also supported in all subsequent studies, no further attempt had been made to explore the relationships. As suggested by Byrne (1998), the interconnectedness among the dimensions may signify that they could be better represented by a second-order factor structure. Therefore, this study attempts to validate the second-order factor structure for organizational effectiveness. In addition, the study further investigated whether a one-factor or a two-factor structure could best represent the second-order structure of organizational effectiveness.

METHODOLOGY

This section describes the conceptual model, the measures, and the participants of the study.

Conceptual Model

The study aimed to test the applicability of Cameron's model of organizational effectiveness in Hong Kong higher education institutions. In addition, the study investigated whether the second-order structure could best be represented by a single factor, representing Cameron's original model, or by two factors, corresponding to student and staff-and-institutional issues. The two test models are shown in Fig. 1, and details of the figure are explained in the following sections.
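To illustrate what positing organizational effectiveness as a second-order construct implies, the sketch below (our illustration, with arbitrary loadings and sample size, not data from the study) generates first-order dimension scores from a single higher-order factor; the dimensions then correlate only because they share that factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000        # hypothetical sample size, for illustration only
loading = 0.8   # illustrative second-order loading, identical for all dimensions

# One second-order factor ("overall effectiveness") drives four
# first-order dimensions; each dimension also has a unique component.
g = rng.standard_normal(n)
dimensions = np.stack(
    [loading * g + np.sqrt(1 - loading**2) * rng.standard_normal(n)
     for _ in range(4)]
)

# Because the dimensions share g, any pair is correlated; the implied
# correlation between two dimensions is loading**2 = 0.64 here.
corr = np.corrcoef(dimensions)
```

In a real second-order model the loadings would differ across dimensions, and it is precisely those estimated loadings that reveal each dimension's relative weight in overall effectiveness.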

Measures

Based on the foregoing discussion, Cameron's instrument has proved a useful tool for assessing organizational effectiveness in general, except that the dimension of organizational health is considered to be a determinant rather than an indicator of organizational effectiveness. Therefore, the study excluded Cameron's organizational health dimension. The study also made two additional modifications to bring the instrument in line with the Hong Kong context. First, it was considered inappropriate to differentiate local students from national students, as in Cameron's original instrument, given Hong Kong's size. Therefore, the item "drawing power of national students" was deleted. Second, the item "adaptiveness to environment" under the "system openness and community interaction" dimension was considered to be a determinant rather than an indicator of effectiveness, and was also deleted. A description of the items included in the instrument is shown in the appendix.

FIG. 1. Simplified path diagrams for the two test models.


Participants

The questionnaire was sent by post to all teaching staff in all departments offering undergraduate programs in seven of the eight institutions of higher education in Hong Kong in mid-April 2001; a reminder was sent 3 weeks later. The Hong Kong Institute of Education was not included in the study as it focuses only on the training of teachers. Of the 4,066 questionnaires sent, 481 were received, representing a response rate of 12%. Twenty-two returned questionnaires were found to have less than 90% of the items completed and were thus discarded from the statistical analysis. Given that the study was the first of its kind and that it targeted all teaching staff in higher education institutions in Hong Kong as respondents, there was no benchmark against which this response rate could be compared. In comparison to the response rate of 7.1% among Hong Kong business managers reported by Harzing (2000) in her comparative study of response rates in 22 countries, the response rate of university academics in this study was considered acceptable. Moreover, given that responses were received from people holding a wide range of academic positions (from chair professors to instructors) and that the distribution of respondents by institution exhibited a fair resemblance to that in the general population of university staff, the data were taken to provide a fair representation of the views of faculty members in each institution. A summary of respondents by institution is reported in Table 1.

As with most quantitative surveys, the response rate of the questionnaire was largely beyond the control of the researchers. Thus, the results can only represent the perceptions of the faculty who responded to the survey and cannot claim to be an unbiased or objective reflection of reality.

TABLE 1. Summary of Responses by Institutions

                     Questionnaires  Left            Questionnaires  Partially      Valid            Response Rate
Institution          Sent (1)        Employment (2)  Returned (3)    Completed (4)  (3-4)            in % [3/(1-2)]
Baptist U            335 (8.24%)     0               38              2              36 (7.84%)       11.3
CityU                464 (11.41%)    24              48              1              47 (10.24%)      10.9
CUHK                 891 (21.91%)    19              111             2              109 (23.74%)     12.6
HKU                  868 (21.35%)    18              113             12             101 (22.00%)     13.1
Lingnan              142 (3.49%)     0               18              0              18 (3.92%)       12.7
PolyU                946 (23.27%)    9               83              2              81 (17.65%)      8.9
HKUST                420 (10.33%)    9               54              0              54 (11.77%)      13.1
No personal
particulars given                                    16              3              13 (2.84%)
Total                4066 (100%)     59              481             22             459 (100%)       12.0
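As a check on Table 1, the response-rate column follows returned / (sent − left employment). A minimal sketch of that arithmetic (our illustration, not code from the study):

```python
def response_rate(sent, left_employment, returned):
    """Response rate in percent: returned / (sent - left_employment)."""
    return 100 * returned / (sent - left_employment)

# Overall rate: 481 returned out of 4066 sent less 59 who left employment.
total = response_rate(4066, 59, 481)   # approximately 12.0

# A per-institution example: CityU.
cityu = response_rate(464, 24, 48)     # approximately 10.9
```

A few printed rows round slightly differently, so only the rates that reproduce exactly are asserted here.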


DATA ANALYSIS

This section describes the two stages of the data analysis. The first part focuses on the validation of Cameron's model in the Hong Kong higher education context. The second part describes the testing of a second-order factor structure.

Validating and Modifying Cameron’s Model in Hong Kong

The Cronbach alphas of the eight dimensions were first assessed for internal consistency. As shown in Table 2, all the reliability alphas were within a satisfactory range (0.7323–0.8704), indicating that the dimensions were internally consistent.
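The internal-consistency check above uses the standard Cronbach's alpha formula; the function below is our sketch of that computation (the study itself reports only the resulting alphas):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Perfectly parallel items yield an alpha of 1, while uncorrelated items yield an alpha near 0.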

In most of the previous studies adopting Cameron's instrument, researchers have confirmed the validity of the instrument solely on the basis of the alpha results (e.g., Cameron and Ettington, 1988; Lysons and Hatherly, 1992, 1996, 1998). This study, however, attempted to confirm the factor structure of the eight dimensions using LISREL 8 (Jöreskog and Sörbom, 1993). The eight dimensions were represented as the endogenous constructs, and the item statements describing each of the dimensions as the exogenous variables for the corresponding constructs in the model.

TABLE 2. Reliability Alphas of the Eight Dimensions

Dimension                                              Reliability Alpha
Student Education Satisfaction                         0.7323
Student Academic Development                           0.7579
Student Career Development                             0.8177
Student Personal Development                           0.7618
Faculty Employment Satisfaction                        0.8704
Professional Development and Quality of the Faculty    0.7768
System Openness and Community Interaction              0.7644
Ability to Acquire Resources                           0.8035

The fitness of the model was assessed on the traditional χ2 test, the root-mean-square error of approximation (RMSEA), and its associated confidence interval. According to Byrne (1998), a small chi-square to degrees of freedom ratio (preferably around 2.00–3.00), a small RMSEA value (with 0.05 indicating a good fit and 0.08 the upper limit), and a narrow confidence interval indicate a reasonable fit of the data. However, opinions in the literature are divided as to what constitutes an acceptable RMSEA value. Various authors have suggested a number of different thresholds: some more stringent (0.05 by Stan, quoted in Hayduk and Glaser, 2000), some more lenient (0.08 by Browne and Cudeck, 1993; Byrne, 1998; Hair, Anderson, Tatham, and Black, 1995), and some in between (0.06 by Yuan and Bentler, 1998). Hayduk and Glaser argued that RMSEA is a sample-size dependent adjustment, and strict adherence to it might lead researchers to reject models when using large sample sizes. Thus, they maintained that "[RMSEA] is an elastic tape measure that [researchers], and others, can stretch to let them accept models they would like to accept" (p. 30). In line with these concerns about the acceptable level of RMSEA, in this study it was taken as just one of three measures employed as fit indices. The other two indexes used were the comparative fit index (CFI) and the non-normed fit index (NNFI), which are recommended for their relative independence from sample size and their sensitivity in penalizing nonparsimonious models (Marsh, Balla, and McDonald, 1988; Marsh and Hau, 1996). An index of 0.90 or above for CFI and NNFI is interpreted as an indication of reasonably good fit.
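For reference, the three fit measures discussed above can be computed from the model chi-square, its degrees of freedom, the sample size, and, for CFI and NNFI, the chi-square of an independence (baseline) model. The functions below are our sketch of the standard formulas, not the authors' code:

```python
import math

def rmsea(chi2, df, n):
    """Root-mean-square error of approximation: sqrt(max(chi2/df - 1, 0) / (n - 1))."""
    return math.sqrt(max(chi2 / df - 1, 0) / (n - 1))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index of model m relative to baseline model b."""
    num = max(chi2_m - df_m, 0.0)
    den = max(chi2_b - df_b, chi2_m - df_m, 0.0)
    return 1.0 if den == 0 else 1 - num / den

def nnfi(chi2_m, df_m, chi2_b, df_b):
    """Non-normed fit index (Tucker-Lewis index)."""
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)
```

RMSEA is zero whenever the model chi-square does not exceed its degrees of freedom, which is why large samples (through n in the denominator) can shrink it even for mediocre models, the concern Hayduk and Glaser raise.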

A confirmatory factor analysis was run in which all the factors were allowed to correlate freely, and the following goodness-of-fit indexes were found: χ2 = 1615.03 (d.f. = 377), RMSEA = 0.088, CFI = 0.80, NNFI = 0.77. The results indicated a poor fit to the data, and so the model was revised.

In order to modify and improve the model, the correlations among the eight effectiveness dimensions (as reported in Table 3) were reviewed in tandem with previous findings reported by various researchers in other parts of the world. As discussed earlier, two deviations from Cameron's model have been reported in the literature. The first deviation was the integration of the dimensions of professional development and quality of the faculty and ability to acquire resources. The second was the splitting of the system openness and community interaction dimension into two components. Therefore, these two modifications were tested in the subsequent model respecification. In addition, the high correlation found between D1 (student education satisfaction) and D4 (student personal development), as reported in Table 3, also suggested a possible combination of these two dimensions. The integration of these two dimensions therefore constituted the third possible modification to the model.

TABLE 3. Correlations among the Eight Organizational Effectiveness Dimensions

       D1    D2    D3    D4    D5    D6    D7    D8
D1   1.00
D2   0.73  1.00
D3   0.64  0.66  1.00
D4   0.90  0.84  0.67  1.00
D5   0.67  0.62  0.56  0.72  1.00
D6   0.60  0.60  0.52  0.64  0.89  1.00
D7   0.65  0.59  0.60  0.65  0.87  0.89  1.00
D8   0.50  0.56  0.46  0.58  0.72  0.95  0.70  1.00

Key: D1 = Student Education Satisfaction; D2 = Student Academic Development; D3 = Student Career Development; D4 = Student Personal Development; D5 = Faculty Employment Satisfaction; D6 = Professional Development and Quality of the Faculty; D7 = System Openness and Community Interaction; D8 = Ability to Acquire Resources.
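The screening step described above, reviewing Table 3 for very highly correlated dimensions, can be sketched as follows; the 0.89 cut-off is ours, chosen purely to flag the pairs discussed in the text:

```python
import numpy as np

labels = ["D1", "D2", "D3", "D4", "D5", "D6", "D7", "D8"]

# Table 3's lower triangle, filled out to a full symmetric matrix.
r = np.array([
    [1.00, 0.73, 0.64, 0.90, 0.67, 0.60, 0.65, 0.50],
    [0.73, 1.00, 0.66, 0.84, 0.62, 0.60, 0.59, 0.56],
    [0.64, 0.66, 1.00, 0.67, 0.56, 0.52, 0.60, 0.46],
    [0.90, 0.84, 0.67, 1.00, 0.72, 0.64, 0.65, 0.58],
    [0.67, 0.62, 0.56, 0.72, 1.00, 0.89, 0.87, 0.72],
    [0.60, 0.60, 0.52, 0.64, 0.89, 1.00, 0.89, 0.95],
    [0.65, 0.59, 0.60, 0.65, 0.87, 0.89, 1.00, 0.70],
    [0.50, 0.56, 0.46, 0.58, 0.72, 0.95, 0.70, 1.00],
])

# Dimension pairs correlated at 0.89 or above are candidates for merging.
high_pairs = {(labels[i], labels[j])
              for i in range(8) for j in range(i + 1, 8)
              if r[i, j] >= 0.89}
```

The flagged set includes the D1–D4 pair (0.90) that motivated the third modification, along with D6's very high correlations with D5, D7, and D8.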

In respecifying the structural model of organizational effectiveness, the comments made by Byrne (1998) are worth noting. She advocated that once a researcher decides to restructure a model, the analysis should be framed within an exploratory, rather than a confirmatory, mode. Although such post hoc model fitting has been considered problematic by a number of researchers, its advocates have argued that it can help the investigator probe deeper into the factors accounting for the misfit of a model (Tanaka, 1993). As no previous attempts had been made to validate Cameron's instrument in the Hong Kong context, adopting an exploratory analysis was considered justifiable and, indeed, necessary for the purposes of modifying and validating Cameron's instrument in the Hong Kong higher education context.

One way to address problems associated with post hoc model fitting is to employ a cross-validation strategy whereby the final model derived from the post hoc analysis is tested on a second independent sample from the same population (Byrne, 1998; Cudeck and Browne, 1983). Given the difficulty of acquiring a separate sample for validation in most cases, it is suggested that the researcher randomly split the available data into two sets, one of which serves as the calibration set and the other as the validation set (Byrne, 1998). The researcher can first explore the factor structure with the calibration set, and the validity of the model so determined can then be confirmed using the validation set. Cudeck and Browne suggest the use of a cross-validation index (CVI) as an indicator of the discrepancy found between the two models: the smaller the value, the stronger the evidence of replicability across the two samples. An issue of concern in splitting the sample, however, is that the resultant sample size in each of the data sets should be sufficient to support the number of variables included in the study. In view of the total sample size in the study, the resultant size of each split set was considered justifiable for the number of variables to be determined (Pedhazur and Schmelkin, 1991).

Cross-Validation of the Organizational Effectiveness Factor Structure

Following the suggestions of Byrne (1998) and Cudeck and Browne (1983), the data in the study were randomly split into two sets, one for calibration and the other for validation purposes. A respecification of the model was first administered to the calibration set.

A revised model incorporating the three possible modifications as discussed earlier was composed. The three modifications are: (a) combining ability to acquire resources (D8) with professional development and quality of the faculty (D6); (b) integrating student education satisfaction (D1) with student personal development (D4); and (c) dividing system openness and community interaction (D7) into community interaction and system openness. Furthermore, items in the combined dimensions were randomly paired, as all the reasons provided by Marsh and O'Neill (1984) in justifying item pairing appeared to be applicable to the study. The reasons given by Marsh and O'Neill are

(a) the ratio of the number of subjects to the number of variables is increased; (b) each variable should be more reliable and should have a smaller unique component; (c) the factor loadings should be less affected by the idiosyncratic wording of individual items; (d) the cost of the factor analyses will be substantially reduced; and (e) it becomes possible to use the method of factor analyses in cases when the number of items exceeds either the limitations of commercially available factor analyses or the memory capacity of the available computers. (p. 157)
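The item-pairing procedure amounts to forming parcels by averaging randomly paired items within a dimension. A minimal sketch, assuming each item is represented as a list of respondent scores (the averaging rule and the example data are illustrative assumptions, not details taken from the study):

```python
import random

def pair_items(item_scores, seed=7):
    """Randomly pair items within a dimension and average each pair,
    roughly halving the number of observed variables (item parceling).
    item_scores: one inner list of respondent scores per item."""
    rng = random.Random(seed)
    order = list(range(len(item_scores)))
    rng.shuffle(order)
    parcels = []
    for i in range(0, len(order) - 1, 2):
        a, b = item_scores[order[i]], item_scores[order[i + 1]]
        parcels.append([(x + y) / 2 for x, y in zip(a, b)])
    if len(order) % 2:                 # an odd leftover item stays unpaired
        parcels.append(item_scores[order[-1]])
    return parcels

# Four hypothetical items scored by three respondents -> two parcels.
items = [[4, 5, 3], [2, 3, 4], [5, 5, 5], [1, 2, 3]]
parcels = pair_items(items)
```

Each parcel then enters the factor analysis in place of its constituent items, which is what drives the subject-to-variable ratio benefit Marsh and O'Neill list first.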

As the interconnectedness among the dimensions has been supported in previous studies, the dimensions were allowed to correlate freely in the revised structure model.

The revised model yielded the following fit statistics: χ2 = 510.63 (d.f. = 209), RMSEA = 0.075, CFI = 0.92, NNFI = 0.91, and all the parameter estimates were high and statistically significant. Judging from these results, the revised factor structure for the calibration model was considered to be acceptable.
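The RMSEA reported here follows from the model χ2, its degrees of freedom, and the sample size via the standard formula. A sketch; note that the subsample size N = 258 below is an assumption chosen to be consistent with the reported RMSEA, as the size of the calibration split is not restated in this section:

```python
from math import sqrt

def rmsea(chi2, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Revised calibration model, with an assumed subsample of N = 258.
value = rmsea(510.63, 209, 258)
```

A χ2 at or below its degrees of freedom gives an RMSEA of zero, which is why the index is read as a measure of misfit per degree of freedom.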

To support the factor structure of this model, a cross-validation test using SIMPLIS was conducted. The revised model was used as the calibration reference against which the covariance structure of the second set of data was tested. The CVI as reported in the SIMPLIS output was 1.67, with a 90% confidence interval of 1.36 to 2.02. In comparison to the CVIs reported in the LISREL output for the earlier calibration model (Expected Cross-Validation Index [ECVI] = 2.89, ECVI for saturated model = 2.48, and ECVI for independence model = 19.27), the CVI of the validation model was smaller, suggesting a high likelihood of replicability across the two samples.

To further validate the applicability of the model to the data, the model was run with the entire data set. The resultant fit statistics, χ2 = 763.90 (d.f. = 209), RMSEA = 0.079, CFI = 0.92, NNFI = 0.91, indicated that the model fit the whole data set reasonably well. Therefore, the validity of the instrument was supported.

In sum, organizational effectiveness in the Hong Kong higher education context, as validated by the above analysis, comprises seven dimensions. The seven dimensions are:

1. Student Education Satisfaction and Personal Development (SES);
2. Student Academic Development (SAD);
3. Student Career Development (SCD);
4. Faculty Employment Satisfaction (FES);
5. Ability to Acquire Resources and Quality of the Faculty (AAR);
6. Community Interaction (CI); and
7. System Openness (SO).

Having dealt with research question (1), the analysis proceeded to research questions (2) and (3).

Testing of the Second-Order Construct of Organizational Effectiveness

As discussed earlier, the multidimensional nature of the organizational effectiveness phenomenon suggests that it might be a second-order construct (Byrne, 1998); the analysis therefore proceeded to test this proposition.

Two test models were examined in the study: one was a single second-order model and the other a two-factor second-order model (Fig. 1). For testing the invariance between two models, Marsh (1987) suggested the use of the target coefficient; a figure approaching one indicates that the test model can explain the variance as well as the base model. In other words, the test model with the higher target coefficient is considered to provide a better fit to the data.

A base model representing a first-order confirmatory factor analysis, in which all seven dimensions were allowed to correlate freely, was run. The factor structure yielded the following fit statistics: χ2 = 527.30 (d.f. = 203), RMSEA = 0.061, CFI = 0.96, NNFI = 0.94, whereas the null model (in which all relationships among the variables are constrained to zero) displayed a χ2 of 7496.83 with 253 degrees of freedom.
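The incremental fit indices reported for the base model can be recovered from the χ2 values of the target and null models using the standard CFI and NNFI (Tucker-Lewis) formulas, a useful arithmetic check:

```python
def cfi(chi2, df, chi2_null, df_null):
    """Comparative fit index: 1 minus the ratio of the target model's
    noncentrality to the null model's noncentrality."""
    num = max(chi2 - df, 0.0)
    den = max(chi2_null - df_null, chi2 - df, 0.0)
    return 1.0 - num / den

def nnfi(chi2, df, chi2_null, df_null):
    """Non-normed fit index (Tucker-Lewis index), based on the
    chi-square-to-d.f. ratios of the null and target models."""
    r_null = chi2_null / df_null
    r_model = chi2 / df
    return (r_null - r_model) / (r_null - 1.0)

# First-order base model against the null model reported above.
base_cfi = cfi(527.30, 203, 7496.83, 253)    # ~0.96, as reported
base_nnfi = nnfi(527.30, 203, 7496.83, 253)  # ~0.94, as reported
```

Both computed values round to the CFI and NNFI figures given for the base model, confirming the internal consistency of the reported statistics.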

A second-level factor structure was then tested as Test Model One (TM1), with the seven dimensions loading on a single organizational effectiveness construct. The following indexes were obtained: χ2 = 631.29 (d.f. = 217), RMSEA = 0.067, CFI = 0.94, NNFI = 0.93. The second model (TM2), with the student-related factors (the first three factors) loading on one second-order factor and the rest on another second-order factor, was then tested. This model yielded the following goodness-of-fit statistics: χ2 = 770.29 (d.f. = 217), RMSEA = 0.078, CFI = 0.92, NNFI = 0.91.

Based on the above statistics, the target coefficients for TM1 and TM2 were calculated. Taking the first-order structure and the second-order structures as the base and the test models respectively, a target coefficient of 0.985 ((7496.83 − 631.29)/(7496.83 − 527.30)) was obtained for TM1, and that for TM2 was 0.965 ((7496.83 − 770.29)/(7496.83 − 527.30)). Comparing these two target coefficients, TM1, with one single second-level latent factor, was better than TM2 at producing a plausible structure for the first-order model. Therefore, research questions (2) and (3) were answered: the seven dimensions of organizational effectiveness could be represented by a single second-order construct of organizational effectiveness.
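The target-coefficient arithmetic can be checked directly from the reported χ2 values:

```python
def target_coefficient(chi2_null, chi2_test, chi2_base):
    """Target coefficient as computed in the text: the proportion of
    the null-to-base chi-square reduction that the (second-order)
    test model retains relative to the (first-order) base model."""
    return (chi2_null - chi2_test) / (chi2_null - chi2_base)

# Null model chi2 = 7496.83; base (first-order) model chi2 = 527.30.
tm1 = target_coefficient(7496.83, 631.29, 527.30)  # single second-order factor
tm2 = target_coefficient(7496.83, 770.29, 527.30)  # two second-order factors
```

Both values reproduce the 0.985 and 0.965 figures reported above, with TM1's coefficient closer to one.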

DIMENSIONS OF ORGANIZATIONAL EFFECTIVENESS IN THE HONG KONG HIGHER EDUCATION CONTEXT

The above findings suggested that Cameron's instrument could be adopted in Hong Kong higher education institutions with some modifications. The first modification was the subsuming of the student personal development dimension into the student education satisfaction dimension. This suggested that student personal development might not be a distinct concern of the academic staff in higher education institutions in Hong Kong. The findings might indicate that education institutions in Hong Kong tend to rely heavily, if not solely, on students' academic achievement as a screening mechanism. The combination of these two dimensions reflected that the personal development of students was viewed by faculty members in Hong Kong more in the light of offering a satisfying education experience to students than of developing the personal qualities necessary for a fulfilling and successful life after graduation. The former puts more emphasis on the short-term satisfaction of students during their course of study, whereas the latter focuses more on the development of the students in the longer term. Although there have been increasing calls from the public for the "whole-person development" of students, the personal development of students, as perceived by faculty members, was still more the responsibility of student counselling units than of academic departments.

The second modification was the integration of the professional development and quality of the faculty dimension with the ability to acquire resources dimension. This is somewhat similar to Lysons and Ryder's (1988) findings in the Australian context. The authors reported an integration of these dimensions and attributed it to the funding system in Australia, in which student enrolments and research output were the two main criteria for resource allocation. Given that the funding system in Hong Kong higher education institutions is similar to that in Australia, in that it is also based on student enrolments and research output, the combination of the dimensions of ability to acquire resources and professional development and quality of the faculty in the Hong Kong context was explicable.

The third modification was the division of the system openness and community interaction dimension. This is also somewhat similar to the results reported by Lysons and Hatherly (1996) in the U.K. context. The authors reported a division of the dimension of system openness and community interaction in U.K. institutions and asserted that such a division could help to differentiate better two aspects of the model, one related to the participation of faculty in community activities and the other related to educational programs to satisfy the community's needs. The former reflects individuals' efforts to serve the community, whereas the latter covers an institution's attempt to be more accountable to the public. The findings suggest that such a split would also have been justified for Hong Kong higher education institutions. Given the public's growing demands for improving the communication skills of graduates and the applicability of curricula to job demands, there has been pressing concern for higher education institutions to concentrate more on interaction with and adaptation to the external environment (Ming Pao, 19 Feb. 2001). In response to calls from the public, higher education institutions consider system openness an emergent and important dimension in achieving effectiveness. Moreover, the inclusion of community service as one of the criteria for evaluating the performance of academic members also inspires faculty members to emphasise the area of community interaction.

THE RELATIVE VALUE ATTACHED TO EACH OF THE ORGANIZATIONAL EFFECTIVENESS DIMENSIONS BY HONG KONG ACADEMICS

As discussed earlier, an advantage of positing organizational effectiveness as a second-order construct is that the relative value attached to each of the organizational effectiveness dimensions by the faculty who responded to the survey can be ascertained. The respective factor loadings (β) of each of the dimensions on the second-order construct of organizational effectiveness were examined (Table 4).

TABLE 4. Reliability Alphas and Dimension Loadings of the Seven Dimensions of Organizational Effectiveness

Dimension                                                       Loading (β)   Reliability Alpha
Student Education Satisfaction and Personal Development (SES)      0.77           0.8610
Student Academic Development (SAD)                                 0.73           0.7579
Student Career Development (SCD)                                   0.67           0.8177
Faculty Employment Satisfaction (FES)                              0.94           0.7618
Ability to Acquire Resources and Quality of the Faculty (AAR)      0.89           0.8704
Community Interaction (CI)                                         0.85           0.7463
System Openness (SO)                                               0.68           0.7843


As shown in Table 4, faculty employment satisfaction scores the highest loading (β = 0.94), followed by ability to acquire resources and quality of the faculty (β = 0.89), whereas student career development scores the lowest (β = 0.67).

The findings indicate that, in general, faculty members place more value on the staff- and organisation-related dimensions than on the student-related dimensions. Given that the target respondents of the study were faculty members in the seven institutions, an emphasis on staff satisfaction is understandable. In addition, it could also be attributable in part to the fact that more satisfied faculty members were more willing to return the questionnaire. The findings also reflected the influence of the RAE on universities and their faculty. Since the introduction of the RAE in Hong Kong a decade ago, the ability of a higher education institution to secure research funds as well as to attract and retain productive and active staff members has been the prime concern of universities. Unlike some universities in the U.S. that rely heavily on donations as a major source of income, universities in Hong Kong rely almost solely on the UGC for funding. They are therefore under continuous pressure to adhere to the rules set by the UGC. Given that the RAE has resource implications for higher education institutions and the student-focused TLQPR has none, the latter receives little attention in higher education institutions. Therefore, the relatively high weighting of the dimension of ability to acquire resources and quality of the faculty is understandable. The findings might imply that the criteria used by the UGC to oversee higher education institutions in Hong Kong may have adversely affected their development by diverting a disproportionate amount of attention toward research output.

The findings also reflect the increasing value attached to the dimension of community interaction by faculty members, in response to the inclusion of community services as one of the criteria for evaluating the performance of academic members in many higher education institutions in Hong Kong. On the other hand, the findings revealed that the dimension of system openness (β = 0.68) did not receive as much attention from faculty members as community interaction (β = 0.85). This suggests that, in general, faculty members did not endorse the view that the design of curricula should cater for the needs of industry, although there have been increasing calls for this.

The scant attention paid to the student-related dimensions by the faculty members who responded to the survey suggests that Hong Kong higher education institutions and their faculty are more concerned with satisfying the requirements of the RAE introduced by the UGC. This is understandable given that the UGC is the major, if not the only, source of income of universities in Hong Kong. However, the significant emphasis placed on satisfying the UGC's research demands may suggest that universities in Hong Kong are not fully responsive to other stakeholders. With reference to Janoski's (1998) notion of the purposes of higher education institutions, responsible universities should be simultaneously accountable to the public, private, state, and market spheres. The findings suggest that universities in Hong Kong should rethink their long-term priorities.

The findings may suggest that the academics in Hong Kong who responded to the survey are pragmatic and more concerned with issues related to their employment than with student-related issues, but the results may also be taken as a reflection of the fact that local academics are pressured to prioritise research output above everything else. The limited attention paid to student-related dimensions by university academics may call into question the efficacy of the TLQPR and other initiatives to improve university teaching. Although the TLQPR is designed to help universities to review their own teaching and learning processes and to formulate initiatives for improvement, it has not been seen as being as central as the RAE because it does not carry major resource implications. Even though student evaluations of teaching are mandatory in many universities, the results are often used as evidence against poor performance rather than to support or build good performance (Kwan, 2002). In other words, a poor evaluation rating may be an obstacle to the career advancement of an individual, but a high rating may not be a major asset. This may indicate that universities in Hong Kong should review their prevailing systems for promoting teaching quality and consider increasing the value of the rewards available to academics for good teaching.

CONCLUSION

The study has supported the validity of Cameron's instrument in Hong Kong with three contextually specific modifications. Moreover, the proposition that organizational effectiveness is a second-order construct has been supported empirically. Building on the second-order structure, the findings of the study reflect that the welfare of students has received less attention from the universities and their faculty than it deserves. This may be partly attributable to the heavy reliance of universities on the UGC for funding. Unless all the parties involved seriously reconsider their respective approaches to delivering a quality education in Hong Kong higher education institutions, it is unlikely that the interests of students can be adequately served.


APPENDIX I. A Summary of Items Included in the Instrument for Assessing Organizational Effectiveness

Student Education Satisfaction
• Students enjoy their school life
• Students maintain a good relationship with faculties
• Students are highly satisfied with their programmes of study
• There is a high student drop-out rate

Student Academic Development
• Students achieve a high level of academic attainment
• Students only aim to get an academic qualification but not acquire knowledge
• Students are self-directed learners

Student Career Development
• Graduates are able to secure employment shortly after they graduate
• Graduates are employed in their relevant fields of study
• Graduates are highly commended by their employers
• Graduates get good salaries in comparison to graduates from the same discipline in other local universities

Student Personal Development
• Students are very civic-minded
• Students are active in extracurricular activities
• Students show high respect for teachers

Faculty Employment Satisfaction
• Faculties enjoy teaching
• Faculties enjoy conducting research
• Faculties are satisfied with their working environment
• My university is a good employer

Professional Development and Quality of the Faculty
• My university ranks the highest in research and publication amongst all local universities in my field
• Faculties have the best qualifications among all local universities
• Faculties are held in high esteem in local academic circles
• My university encourages and supports staff development

System Openness and Community Interaction
• Faculties are active in various community services
• Emphasis on meeting the needs of employers
• Faculties enjoy a good reputation with the general public
• My university maintains a good link with industry and other higher education institutions

Ability to Acquire Resources
• My university can attract the best student applicants
• My university can attract and retain good quality staff
• My university outperforms other local universities in securing research funds
• My university outperforms other local universities in securing financial sponsorships from industry


REFERENCES

Browne, M. W., and Cudeck, R. (1993). Alternative ways of assessing model fit. In: Bollen, K. A., and Long, J. S. (eds.), Testing Structural Equation Models, Sage Publications, Newbury Park, CA, pp. 136–162.

Byrne, B. M. (1998). Structural Equation Modeling with LISREL, PRELIS, and SIMPLIS: Basic Concepts, Applications, and Programming, Lawrence Erlbaum Associates, Mahwah, NJ.

Cameron, K. S. (1978). Measuring organizational effectiveness in institutions of higher education. Administrative Science Quarterly 23: 604–632.

Cameron, K. S. (1980). Critical questions in assessing organizational effectiveness. Organizational Dynamics 9: 66–80.

Cameron, K. S. (1981). Domains of organizational effectiveness in colleges and universities. Academy of Management Journal 24: 25–47.

Cameron, K. S. (1983). Strategic responses to conditions of decline: Higher education and the private sector. Journal of Higher Education 54: 359–380.

Cameron, K. S. (1986). A study of organizational effectiveness and its predictors. Management Science 32(1): 87–112.

Cameron, K. S., and Ettington, D. R. (1988). The conceptual foundations of organizational culture. In: Smart, J. C. (ed.), Higher Education: Handbook of Theory and Research (Vol. 4), Agathon Press, New York, pp. 356–396.

Cameron, K. S., and Tschirhart, M. (1992). Postindustrial environments and organizational effectiveness in colleges and universities. Journal of Higher Education 63(1): 87–108.

Cameron, K. S., and Whetten, D. A. (1983). Organizational Effectiveness: A Comparison of Multiple Models, Academic Press, New York.

Cudeck, R., and Browne, M. W. (1983). Cross-validation of covariance structures. Multivariate Behavioural Research 18: 147–167.

Fjortoft, N., and Smart, J. C. (1994). Enhancing organizational effectiveness: The importance of culture type and mission agreement. Higher Education 27: 429–447.

Goodman, P. S., Atkin, R. S., and Schoorman, F. D. (1983). On the demise of organizational effectiveness studies. In: Cameron, K. S., and Whetten, D. A. (eds.), Organizational Effectiveness: A Comparison of Multiple Models, Academic Press, New York, pp. 163–183.

Goodman, P. S., and Pennings, J. M. (1980). Critical issues in assessing organizational effectiveness. In: Lawler, E. E., III, Nadler, D. A., and Cammann, C. (eds.), Organizational Assessment: Perspectives on the Measurement of Organizational Behaviour and the Quality of Working Life, Wiley, New York, pp. 108–153.

Hair, J. F., Jr., Anderson, R. E., Tatham, R. L., and Black, W. C. (1995). Multivariate Data Analysis with Readings, Prentice Hall, Upper Saddle River, NJ.

Harzing, A.-W. (2000). Cross-national industrial mail surveys: Why do response rates differ between countries? Industrial Marketing Management 29: 243–254.

Hayduk, L. A., and Glaser, D. N. (2000). Jiving the four-step, waltzing around factor analysis, and other serious fun. Structural Equation Modeling 7(1): 1–35.

Janoski, T. (1998). Citizenship and Civil Society: A Framework of Rights and Obligations in Liberal, Traditional, and Social Democratic Regimes, Cambridge University Press, Cambridge.

Johnes, J., and Taylor, J. (1990). Performance Indicators in Higher Education, Open University Press, Bristol, PA.

Joreskog, K. G., and Sorbom, D. (1993). LISREL 8 User's Reference Guide, Scientific Software International, Chicago.


Kwan, P. (2002). An investigation of the relationship between organizational culture and organizational effectiveness in Hong Kong higher education institutions. Doctoral dissertation, The Chinese University of Hong Kong, HKSAR, China.

Lysons, A. (1990a). Taxonomies of higher educational institutions predicted from organisation climate. Research in Higher Education 31: 115–128.

Lysons, A. (1990b). Dimensions and domains of organizational effectiveness in Australian higher education. Higher Education: The International Journal of Higher Education and Educational Planning 20(3): 287–300.

Lysons, A. (1993). The typology of organizational effectiveness in Australian higher education. Research in Higher Education 34: 465–487.

Lysons, A., and Hatherly, D. J. (1992). Cameron's dimensions of effectiveness in higher education in the U.K.: A cross-cultural comparison. Higher Education: The International Journal of Higher Education and Educational Planning 23(3): 221–230.

Lysons, A., and Hatherly, D. (1996). Predicting a taxonomy of organizational effectiveness in U.K. higher educational institutions. Higher Education 32: 23–39.

Lysons, A., and Hatherly, D. (1998). Comparison of measures of organizational effectiveness in U.K. higher educational institutions. Higher Education 36(1): 1–19.

Lysons, A., Hatherly, D., and Mitchell, D. A. (1998). Comparison of measures of organizational effectiveness in U.K. higher education. Higher Education 36(1): 1–19.

Marsh, H. W. (1987). The hierarchical structure of self-concept and the application of hierarchical confirmatory factor analysis. Journal of Educational Measurement 24(1): 17–39.

Marsh, H. W., Balla, J. R., and McDonald, R. P. (1988). Goodness-of-fit indexes in confirmatory factor analysis: The effect of sample size. Psychological Bulletin 103(3): 391–410.

Marsh, H. W., and Hau, K. T. (1996). Assessing goodness of fit: Is parsimony always desirable? Journal of Experimental Education 64(4): 364–390.

Marsh, H. W., and O'Neill, R. (1984). Self Description Questionnaire III: The construct validity of multidimensional self-concept ratings by late adolescents. Journal of Educational Measurement 21(2): 153–174.

Ming Pao (2001, February 19). Criticism on graduates' poor communication skills by employers. (In Chinese).

Pedhazur, E. J., and Schmelkin, L. P. (1991). Measurement, Design and Analysis: An Integrated Approach, Lawrence Erlbaum Associates, Hillsdale, NJ.

Smart, J. C. (1989). Organizational decline and effectiveness in private higher education. Research in Higher Education 30: 387–402.

Smart, J. C., and Hamm, R. E. (1993a). Organizational effectiveness and mission orientations of two-year colleges. Research in Higher Education 34: 489–502.

Smart, J. C., and Hamm, R. E. (1993b). Organizational culture and effectiveness in two-year colleges. Research in Higher Education 34: 95–106.

Smart, J. C., Kuh, G. D., and Tierney, W. G. (1997). The roles of institutional cultures and decision approaches in promoting organizational effectiveness in two-year colleges. Journal of Higher Education 68(3): 256–281.

Smart, J. C., Hatherly, D., and Mitchell, D. A. (1998). Comparison of measures of organizational effectiveness in U.K. higher education. Higher Education 36: 1–19.

Smart, J. C., and St. John, E. P. (1996). Organizational culture and effectiveness in higher education: A test of the culture type and strong culture hypotheses. Educational Evaluation and Policy Analysis 18(3): 219–241.


Tanaka, J. S. (1993). Multifaceted conceptions of fit in structural equation models. In: Bollen, K. A., and Long, J. S. (eds.), Testing Structural Equation Models, Sage, Newbury Park, CA, pp. 10–39.

University Grants Committee of Hong Kong (2000). UGC Management Reviews Overarching Report, Hong Kong Government Printer, Hong Kong.

Yuan, K.-H., and Bentler, P. M. (1998). Structural equation modeling with robust covariances. Sociological Methodology 28(1): 363–396.

Received June 7, 2002.