
Journalism & Mass Communication Quarterly
2014, Vol. 91(1) 5-16
© 2014 AEJMC
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1077699013520195
jmcq.sagepub.com

Editorial Report

Who Submits Work to JMCQ and Why? A Demographic Profile and Belief Summary

Brendan R. Watson1 and Daniel Riffe2

This article reports results of a survey of authors of submissions to Quarterly over a five-year period. The goal was to take stock of who the journal’s contributors are and to get a sense of their evaluation of the peer review process. In addition to describing authors who submit their work (faculty rank, academic degrees, years in teaching, etc.), we chose to examine their views on peer review because of concerns in the literature—in journalism and mass communication (J/MC) and the academy in general—with the process.

Studies have shown the process can have inconsistent standards,1 an inability to catch mistakes2 or detect fraud,3 a confirmatory bias,4 and even bias against female authors.5 Given demographic shifts in the academy—women outnumber men among those earning doctorates, though they remain outnumbered among tenure-track faculty6—examining perceived fairness of peer review is timely. This study investigates perceived biases against specific research approaches, but also perceived gender bias.7

We also see these data as providing an "update" to a number of previous studies of J/MC scholars' perceptions of peer review. Ryan compared journal referees' and J/MC faculties' rankings of sixty evaluation criteria.8 While Ryan asked about peer review "in principle," Leslie had AEJMC (Association for Education in Journalism and Mass Communication) members report their levels of satisfaction with peer review "in practice."9 Leslie called it "startling—and troubling" that the highest-rated practice is essentially clerical: "acknowledgment of receipt of your article"!

1. University of Minnesota-Twin Cities.
2. University of North Carolina at Chapel Hill.

Corresponding Author: Daniel Riffe, School of Journalism and Mass Communication, University of North Carolina-Chapel Hill, 383 Carroll Hall, Chapel Hill, NC 27599-3365, USA. Email: [email protected]


Leslie also solicited open-ended responses, which included descriptions of an "old boys" network and comments that J/MC research is "too quantitative" and "based on social science methodologies." Surveying AEJMC members, Poindexter found women more likely than men to rate "bias against methods" and "bias against topics" as threats to the peer review process.10

Surveys in other disciplines have found the strongest predictor of satisfaction is simply whether one's latest manuscript was accepted.11 The authors concluded that peer review is seen as a "hurdle" rather than an opportunity to obtain advice and assistance.12 This "hurdle vs. opportunity" issue has not been addressed within J/MC. Thus, we pose three research questions:

RQ1: What is the demographic profile of authors submitting to Journalism & Mass Communication Quarterly (JMCQ)?
RQ2: How do authors evaluate JMCQ's peer review process, compared with mass communication journals "generally"?
RQ3: What individual characteristics best predict satisfaction with the peer review process?

Method

Design and Sample

Authors who had submitted at least one manuscript to JMCQ from 2005 to 2010 were invited to complete a web-based survey about the review process for the last article they submitted to JMCQ and about review processes of other mass communication journals. A link was emailed in fall 2010 to 714 authors; five reminders yielded 377 (52.8%) responses (330 were fully completed). To secure human subjects approval, all identifying information was removed once survey responses were received. No identifying information was ever shared with any journal personnel, a fact emphasized to respondents.

Respondents indicated the decision (accept, reject, revise) on their last JMCQ submission, how many articles they submitted and had accepted by JMCQ and other mass communication journals during 2005-2010, and their total career peer-reviewed publications (as sole or co-authors). They reported hours per week spent on research and percentage of work effort devoted to research. Each identified a preferred research approach: qualitative, quantitative, mixed-methods, or "other." Finally, respondents indicated their age, sex, years of experience in higher education, highest degree, academic rank, and tenure status.

To measure beliefs about the peer review process, respondents used a 7-point scale (7 = strongly agree) with seventeen statements adapted from previous studies13 about peer review (see Table 2 for wording and descriptive statistics), ranging from administrative processes to the substance of reviewer comments, to perceptions of bias and whether reviewer comments were helpful in improving one's work. Respondents completed the battery of seventeen items for "mass communication journals" before completing the same battery for JMCQ.


Based on principal components analyses, eleven of the seventeen items loaded on a single "Overall Satisfaction" component for both JMCQ (α = .972) and "mass communication journals" (α = .952), while three loaded on an "Overall Bias" component (JMCQ, α = .917; "mass communication journals," α = .837). While three questions failed to load, one ("I was satisfied with the time it took to complete the initial review of my submission") was retained as a single-item measure.
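Component and reliability figures of this kind can be illustrated with standard tools. The sketch below is not the authors' analysis script; it assumes a hypothetical pandas DataFrame whose columns hold the seventeen 7-point items, and it shows one common way to obtain principal component loadings (via scikit-learn) and Cronbach's alpha (computed directly from the item variances).

    # Minimal sketch: principal components and Cronbach's alpha for a
    # battery of Likert items. The DataFrame and column names are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha for the item columns of `items`."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    def component_loadings(items: pd.DataFrame, n_components: int = 2) -> pd.DataFrame:
        """Unrotated principal component loadings for the item columns."""
        z = (items - items.mean()) / items.std(ddof=1)   # standardize each item
        pca = PCA(n_components=n_components).fit(z)
        loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
        return pd.DataFrame(loadings, index=items.columns,
                            columns=[f"PC{i + 1}" for i in range(n_components)])

    # usage (hypothetical): items = survey[seventeen_item_columns].dropna()
    # print(component_loadings(items))
    # print(cronbach_alpha(items[eleven_satisfaction_columns]))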

Results

RQ1 asked about the demographic profile of authors submitting to JMCQ. As Table 1 shows, the average respondent was forty-four and had spent nearly eleven years in higher education. Fifty-six percent were men and most (87.6%) held doctorates. Twenty-six (8.0%) were grad students, more than half (51.9%) were associate or full professors, and 52.8% were tenured. Respondents averaged 17.8 hours weekly on research, which constituted about 38% of work effort. A plurality (44.4%) preferred quantitative methods, compared with 38.3% mixed-methods and 14.8% qualitative methods.

Respondents averaged 15.5 articles published in peer-reviewed journals across their careers, had submitted 9.9 articles to peer-reviewed journals in the past five years, and had 7.5 accepted. Each respondent's five-year "success rate" was computed as the number of manuscripts published divided by the number submitted; the sample-wide average was 0.711 (SD = 0.272).

They had submitted an average of 1.8 articles to JMCQ in the past five years, of which 0.70 had been accepted (average JMCQ "success rate" = 0.325, SD = 0.411). The fact that half are at senior rank and tenured, and that all had submitted a manuscript to JMCQ in the past five years, suggests that these are experienced scholars.
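As a concrete illustration of the "success rate" arithmetic described above, the snippet below uses small made-up counts (not the study's data) to compute the accepted-to-submitted ratio per respondent, along with its sample mean and standard deviation.

    # Minimal sketch of the per-respondent five-year "success rate."
    # The counts here are invented purely for illustration.
    import pandas as pd

    counts = pd.DataFrame({"submitted": [10, 4, 8], "accepted": [7, 1, 6]})
    counts["success_rate"] = counts["accepted"] / counts["submitted"]
    print(round(counts["success_rate"].mean(), 3), round(counts["success_rate"].std(ddof=1), 3))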

Table 1 data also reveal similarities, and some differences, between male and female contributors. Notably, there are no statistically significant gender differences in tenure status, rank (despite a greater percentage of female graduate students and assistant professors), or degree attainment, nor in the time (hours per week) and effort (percentage of total) devoted to research or in preferred research approach. Generally speaking, male and female contributors to JMCQ have been similarly active as researchers during the last five years, submitting and publishing similar numbers of articles in JMCQ and in other peer-reviewed journals.

Career-wise, however, men in the sample had significantly more total refereed journal publications (17.5) than women (12.9), a "gender gap" in productivity that mirrors age differences: female respondents were significantly younger than their male counterparts and had significantly less academic experience. Judging from self-reported data for this sample, then, any gender difference in research productivity is a function of the age and greater accumulated experience of the males in the sample.



Table 1. Descriptive Statistics on Academic Variables, Age and Years on Faculty, and Research Activity, by Gender.

Variable | Total (N = 318) | Men (n = 178) | Women (n = 140)
Percentage (%)
Tenured | 52.8 | 56.7 | 47.9
Rank
  Professor | 17.6 | 20.8 | 13.5
  Associate professor | 34.3 | 35.0 | 33.3
  Assistant professor | 32.7 | 28.4 | 38.3
  Other | 3.4 | 4.3 | 2.1
  Graduate student | 8.0 | 6.0 | 10.6
Degree
  Doctorate | 87.6 | 88.5 | 86.5
  Master's | 10.8 | 9.3 | 12.8
  Other | 1.5 | 2.2 | 0.7
Research orientation
  Quantitative methods | 44.4 | 45.6 | 43.0
  Mixed methods | 38.3 | 40.1 | 35.9
  Qualitative methods | 14.8 | 13.2 | 16.9
  Other | 2.5 | 1.1 | 4.2
Mean (M)^a
Age in years | 44.0 | 45.2 | 42.4
Years in faculty position | 10.6 | 11.7 | 9.2
Career total refereed journal articles | 15.5 | 17.5 | 12.9
JMCQ submissions, last five years | 1.8 | 1.8 | 1.8
JMCQ acceptances, last five years | 0.7 | 0.6 | 0.8
Other journal submissions, last five years | 9.9 | 10.4 | 9.4
Other journal acceptances, last five years | 7.5 | 7.6 | 7.3
Hours per week for research | 17.8 | 17.9 | 17.6
Percentage work effort: Research | 37.8 | 38.0 | 37.3
Percentage work effort: Service | 18.3 | 18.6 | 17.8

Note. JMCQ = Journalism & Mass Communication Quarterly.
a. By t-test (p ≤ .05), men's and women's means differ significantly for age in years, years in faculty position, and career total refereed journal articles.

RQ2 asked how this sample of J/MC scholars evaluated JMCQ's peer review process. As shown in Table 2 data for the eleven satisfaction measures, authors rated JMCQ most favorably in terms of politeness (M = 4.96/7, SD = 1.552) and clarity of reviewers' comments (M = 4.75/7, SD = 1.501), but rated the journal's reviewing process least favorably in terms of its contribution to subsequent scholarship (M = 3.98/7, SD = 1.760). Based on an average across all eleven measures (i.e., Table 2's "Overall Satisfaction"), authors are slightly more positive than negative in their view of the journal's peer review process (M = 4.47/7, SD = 1.54), with ten of the eleven component items garnering mean scores above the midpoint.

RQ2 also asked how JMCQ's peer review process compared with other mass communication journals' processes. First, note that all eleven satisfaction items' means for mass communication journals were above the scale midpoint.


Table 2. Peer Review Survey Items,^a and Descriptive Statistics and Percentage of Agreement, by JMCQ Compared^b with Mass Communication Journals Generally (No. of Cases = 315-322).

Item | JMCQ: M | JMCQ: strongly disagree (%) | JMCQ: strongly agree (%) | MC journals: M | MC journals: strongly disagree (%) | MC journals: strongly agree (%)
"Overall Satisfaction" (eleven-item average) | 4.47 | 8.2* | 37.6 | 4.45 | 4.2 | 31.5
How would you rate your satisfaction with the peer review process? | 4.23 | 22.4** | 30.9 | 4.37 | 12.4 | 24.8
How satisfied were you with the editor's letter explaining the editorial decision. | 4.63* | 17.6 | 39.4** | 4.41 | 13.3 | 27.3
I appreciated the thoroughness of reviewers' feedback. | 4.43 | 15.8 | 30.6* | 4.33 | 13.3 | 23.0
Reviewers' comments focused on the substance and presentation of my research. | 4.52 | 14.8 | 31.8* | 4.54 | 10.9 | 23.9
Reviewers' comments were clear and understandable. | 4.75* | 9.4 | 34.5** | 4.56 | 8.5 | 22.4
Reviewers' comments were polite and professional. | 4.96** | 7.9 | 40.0** | 4.72 | 7.6 | 29.4
Reviewers' comments reflected close and careful consideration of my manuscript. | 4.38 | 16.7 | 28.8* | 4.30 | 14.2 | 20.9
Reviewers' comments were helpful in improving the manuscript. | 4.31 | 20.3 | 31.5 | 4.56** | 11.8 | 24.8
The peer review process has enhanced my scholarship. | 4.32 | 18.8** | 28.2 | 4.62** | 10.6 | 30.6
Reviewers' expertise was reflected in their editorial comments. | 4.47 | 14.8 | 28.5** | 4.32 | 12.1 | 17.0
Reviewers' comments have been helpful in improving my subsequent work. | 3.98 | 23.0** | 21.2 | 4.34** | 12.7 | 20.6
"Overall Bias" (three-item average) | 3.91 | 14.5 | 18.5 | 3.90 | 11.2 | 13.9
Reviewers are open to a variety of methods. | 3.81 | 18.5 | 13.6 | 3.94 | 19.4 | 16.1
Reviewers are open to a variety of topics/subjects. | 3.98 | 17.9 | 18.8 | 3.95 | 16.1 | 14.2
Reviewers are open to different ideological perspectives. | 3.95 | 18.5 | 16.7** | 3.81 | 18.8 | 10.3
I was satisfied with the time it took to complete the initial review of my submission. | 4.44** | 20.0 | 34.5** | 3.86 | 20.6 | 14.8

Note. JMCQ = Journalism & Mass Communication Quarterly; MC = mass communication.
a. Items rated on a 7-point Likert-type scale (1 = strongly disagree and 7 = strongly agree).
b. Asterisks indicate significant differences; differences between means were tested with paired-samples t-tests, and differences between percentages were tested using chi-square tests.
*p < .05. **p < .01.


As indicated in Table 2's comparisons of means (paired-samples t-tests), there were no significant differences between authors' "Overall Satisfaction" with JMCQ's or with mass communication journals' peer review processes (JMCQ, M = 4.47; mass communication journals, M = 4.45), or their perceptions that the review processes were free of bias (JMCQ "Overall Bias," M = 3.91; mass communication journals, M = 3.90).

JMCQ was viewed more positively than mass communication journals in terms of administrative performance—the time it takes to complete the initial review of a manuscript (JMCQ, M = 4.44; mass communication journals, M = 3.86) and correspondence explaining the editorial decision (JMCQ, M = 4.63; mass communication journals, M = 4.41). Authors also rated JMCQ more positively for clarity (JMCQ, M = 4.75; mass communication journals, M = 4.56) and politeness of reviewers' comments (JMCQ, M = 4.96; mass communication journals, M = 4.72).

Authors rated JMCQ more negatively in terms of how helpful peer review was for the reviewed manuscripts (JMCQ, M = 4.31; mass communication journals, M = 4.56), subsequent manuscripts (JMCQ, M = 3.98; mass communication journals, M = 4.34), or their scholarship as a whole (JMCQ, M = 4.32; mass communication journals, M = 4.62).

Comparing item means, however, tells only part of the story. Examining the percentage data contrasting JMCQ with "mass communication journals," one discovers that authors were more likely to "strongly agree" with eight out of thirteen individual statements about JMCQ's peer review process. However, significantly larger percentages of respondents strongly disagreed with the statements that JMCQ's peer review process "has enhanced my scholarship" (JMCQ, 18.8%; mass communication journals, 10.6%) and that JMCQ reviewers' comments "have been helpful in improving my subsequent work" (JMCQ, 23.0%; mass communication journals, 12.7%).
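The mean comparisons in Table 2 are paired-samples t-tests, since each respondent rated the same item twice, once for JMCQ and once for mass communication journals generally. A minimal sketch follows, assuming a hypothetical DataFrame with columns jmcq_item and mc_item holding the two ratings of one item; it illustrates the test rather than reproducing the authors' analysis.

    # Minimal sketch: paired-samples t-test comparing one item rated for
    # JMCQ with the same item rated for mass communication journals
    # generally. The DataFrame and its column names are hypothetical.
    import pandas as pd
    from scipy import stats

    def paired_item_test(df: pd.DataFrame, jmcq_col: str = "jmcq_item",
                         mc_col: str = "mc_item"):
        paired = df[[jmcq_col, mc_col]].dropna()   # same respondents in both ratings
        t, p = stats.ttest_rel(paired[jmcq_col], paired[mc_col])
        return {"jmcq_mean": paired[jmcq_col].mean(),
                "mc_mean": paired[mc_col].mean(),
                "t": float(t), "p": float(p)}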

RQ3 asked what individual characteristics and dimensions of the peer review process best predict authors' satisfaction with JMCQ's peer review process. To answer this question, bivariate correlations were examined (see Table 3) and then author satisfaction with JMCQ's peer review process was predicted using hierarchical regression. (Some individual characteristics were moderately or strongly correlated, for example, age with years as a faculty member, and hours and percentage effort devoted to research. To avoid redundancy in the models, only tenure status and time devoted to research were used.)

Dependent measures in the models were the summed "Overall Satisfaction" (Model 3) and "Overall Bias" (Model 2) measures, and satisfaction with the time of the review (Model 1). The first two models used as predictors the individual characteristics and whether the last manuscript was rejected (see Table 4). The third model used the individual characteristics, whether the submission was rejected, and assessments of the time for review and the overall bias score. Whether quantitative research was one's preferred approach was also used as a predictor because previous surveys raised the concern, particularly among female respondents, that J/MC journals are biased against non-quantitative methods.
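A hierarchical (blockwise) regression of this kind can be sketched by fitting nested ordinary least squares models and comparing R² across steps. The sketch below uses statsmodels and hypothetical column names matching the dummy codings in the Table 4 notes; it is an illustration of the technique, not the authors' code.

    # Minimal sketch of a hierarchical (blockwise) OLS regression:
    # enter individual characteristics first, then the editorial decision,
    # and inspect the change in R-squared. Column names are hypothetical.
    import statsmodels.api as sm

    def hierarchical_ols(data, outcome, blocks):
        """Fit nested OLS models, adding one block of predictors at a time."""
        cols = [outcome] + [p for block in blocks for p in block]
        frame = data[cols].dropna()          # keep the same cases in every step
        fits, predictors = [], []
        for block in blocks:
            predictors = predictors + block
            X = sm.add_constant(frame[predictors])
            fits.append(sm.OLS(frame[outcome], X).fit())
        return fits

    # blocks follow the dummy codings described in the Table 4 notes
    # blocks = [["female", "non_quantitative", "research_hours", "tenured"], ["rejected"]]
    # m1, m2 = hierarchical_ols(df, "overall_satisfaction", blocks)
    # delta_r2 = m2.rsquared - m1.rsquared   # variance added by the editorial decision
    # (Standardized betas, as reported in Table 4, would require z-scoring the variables first.)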

As shown in Table 4, the first regression model predicted 10.3% of variance in authors' satisfaction with the time required to review their submission to JMCQ.


Being female (β = .158, p < .01) and tenured (β = .118, p < .05) predicted greater satisfaction with the time for the review. Being tenured in particular may make an individual less anxious, and more patient, about receiving a peer review decision.

Having the submission rejected predicted an additional 6.4% of the variance in authors’ satisfaction with the review time. Authors whose manuscripts were rejected after the initial review were significantly less satisfied with the time it took to receive that decision (β = −.253, p < .001), perhaps reflecting a preference for receiving even a negative decision quickly to move on and resubmit the manuscript elsewhere.

The second regression model predicted 14.2% of variance in authors' perceptions that JMCQ's peer review process is free of "Overall Bias." Individual characteristics predicted 5.5% of variance. Non-quantitative researchers were significantly less likely to perceive the review process as unbiased (β = −.209, p < .001), a relationship that merits elaboration. Non-quantitative authors were in fact more likely to report having their last article rejected by JMCQ (60%) than were quantitative authors (40%), but that difference was not statistically significant, χ²(1, N = 317) = 3.174, p > .05.
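The rejection-by-method comparison just reported is a chi-square test of independence on a 2 × 2 table. A minimal sketch, assuming hypothetical 0/1 dummy columns non_quantitative and rejected coded as in the table notes, is shown below.

    # Minimal sketch: chi-square test of independence between preferred
    # research approach and whether the last JMCQ submission was rejected.
    # Column names follow the (hypothetical) dummy codings used above.
    import pandas as pd
    from scipy import stats

    def rejection_by_method(df: pd.DataFrame) -> str:
        table = pd.crosstab(df["non_quantitative"], df["rejected"])
        chi2, p, dof, _ = stats.chi2_contingency(table)  # Yates correction applied by default for 2x2
        n = int(table.to_numpy().sum())
        return f"chi2({dof}, N = {n}) = {chi2:.3f}, p = {p:.3f}"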


Table 3. Correlations (Pearson's r) between Key Peer Review Variables and Author Characteristics (Pairwise Deletion Used; No. of Cases = 267-325).

Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
1. "Overall Satisfaction" with JMCQ review | — | | | | | | |
2. "Overall Bias" in JMCQ review | .531** | — | | | | | |
3. Satisfaction with time to complete review | .480** | .276** | — | | | | |
4. JMCQ submission rejected^a | −.478** | −.321** | −.284** | — | | | |
5. Tenure status^b | −.089 | −.087 | .096 | −.039 | — | | |
6. Hours per week on research | −.005 | .034 | .064 | −.001 | −.131* | — | |
7. Gender female^c | .180** | −.019 | .161** | −.018 | −.088 | −.011 | — |
8. Non-quantitative preferred^d | −.161** | −.268** | −.059 | .123* | .034 | −.165** | −.005 | —

Note. JMCQ = Journalism & Mass Communication Quarterly.
a. Dummy-coded; 1 = rejected, 0 = accepted or invited to revise and resubmit.
b. Dummy-coded; 1 = tenured, 2 = not tenured.
c. Dummy-coded; 1 = female, 0 = male.
d. Dummy-coded; 1 = qualitative or mixed-methods researcher; 0 = quantitative researcher.
*p < .05. **p < .01.


However, if one uses t-tests to examine differences in computed "success rates" over the past five years, quantitative researchers' success rates for JMCQ are significantly higher (M = 0.424, SD = 0.423) than they are for non-quantitative researchers (M = 0.245, SD = 0.383), t(1,313) = 3.954, p < .001. The opposite is true for mass communication journals generally (quantitative, M = 0.687, SD = 0.269; non-quantitative, M = 0.746, SD = 0.258), t(1,313) = −1.979, p < .05.

The outcome of the peer review process explained an additional 8.7% of the variance in authors' perception that JMCQ's peer review process is unbiased. Authors whose manuscripts were rejected were significantly more likely to view the process as biased (β = −.295, p < .001).

Finally, the third regression model predicted 58.7% of the variance in authors’ “Overall Satisfaction” with JMCQ’s peer review process. Individual characteristics explained 3.5% of the variance. However, only author gender was significant: female authors are more likely to be satisfied with the journal’s peer review process (β = .101, p < .05).

Author satisfaction with the other dimensions of peer review—the length of the review process and the lack of bias—was important, explaining an additional 45.7% of variance in "Overall Satisfaction." Satisfaction with time of review (β = .286, p < .001) and perceived lack of bias (β = .404, p < .001) were both positively associated with "Overall Satisfaction."

Table 4. Regression Models for Author Satisfaction with JMCQ's Peer Review Process.

Predictor variables | DV: Satisfaction with time to review (β) | DV: JMCQ review is unbiased (β) | DV: Overall satisfaction with JMCQ review (β)
Individual characteristics (R²) | .039* | .055*** | .035*
  Female author^a | .158** | .011 | .101*
  Non-quantitative researcher^b | .028 | −.209*** | .018
  Hours per week dedicated to research | .048 | −.011 | −.050
  Tenured author^c | .118* | −.098 | −.059
Dimensions of peer review (ΔR²) | | | .457***
  Satisfaction with time to complete review | | | .286***
  JMCQ review is unbiased | | | .404***
Editorial decision (ΔR²) | .064*** | .087*** | .096***
  JMCQ manuscript rejected^d | −.253*** | −.295*** | −.329***
Total R² | .103*** | .142*** | .587***
No. of cases | 284 | 265 | 265

Note. Cell entries are standardized betas; DV = dependent variable. JMCQ = Journalism & Mass Communication Quarterly.
a. Dummy-coded; 1 = female, 0 = male.
b. Dummy-coded; 1 = qualitative or mixed-methods researcher; 0 = quantitative researcher.
c. Dummy-coded; 1 = tenured, 2 = not tenured.
d. Dummy-coded; 1 = rejected, 0 = accepted or invited to revise and resubmit.
*p < .05. **p < .01. ***p < .001.


Finally, whether an author’s manuscript was rejected explained an additional 9.6% of variance; rejected authors were significantly less satisfied with the process (β = −.329, p < .001).

We began this report by considering whether author gender would relate to perceptions of the peer review process. As shown above, female authors had more positive perceptions of JMCQ's peer review process: in Model 1, they were significantly more likely to be satisfied with the time it took to review their manuscript (β = .158, p < .01), and in Model 3 they were more satisfied overall (β = .101, p < .05). However, gender was not significantly associated with perceptions of bias in JMCQ's peer review process in Model 2 (β = .011, p > .05).

Given the significant effects of both gender and editorial decision, it makes sense to probe further the relationship between these two variables. Table 5 examines gender, acceptance or rejection, overall satisfaction with the peer review process, and perceptions that the peer review process improved one's work. A one-way analysis of variance (ANOVA) shows a significant relationship among gender, manuscript acceptance/rejection, and overall satisfaction with the peer review process. Post hoc tests show no significant difference in overall satisfaction between male (M = 5.24) and female authors (M = 5.50) whose articles were accepted.

However, men who had articles rejected rated the peer review experience significantly more negatively (M = 3.45) than did their female counterparts who had articles rejected (M = 4.14).

Table 5. Gender Differences in Satisfaction with the Peer Review Process by Whether Submission to JMCQ Was Rejected (One-Way ANOVAs).

Measure | Male accept/revise and resubmit^a | Female accept/revise and resubmit^b | Male reject^c | Female reject^d | F (total df)
Overall satisfaction with JMCQ peer review | 5.24^c,d | 5.50^c,d | 3.45^a,b,d | 4.14^a,b,c | 40.98*
  SD | 1.056 | 1.146 | 1.427 | 1.482 | (290)
JMCQ peer review improved manuscript | 5.21^c,d | 5.48^c,d | 3.14^a,b,d | 3.93^a,b,c | 37.71*
  SD | 1.303 | 1.327 | 1.729 | 1.911 | (311)
JMCQ peer review improved subsequent work | 4.60^c,d | 4.98^c,d | 3.08^a,b,d | 3.78^a,b,c | 22.873*
  SD | 1.379 | 1.488 | 1.730 | 1.734 | (312)

Note. All variables were scored on a 7-point Likert-type scale (1 = strongly disagree, 7 = strongly agree). Because of unequal variances, Games-Howell post hoc tests were used to test for significant mean differences between groups; common superscripts indicate significant between-group mean differences within rows. JMCQ = Journalism & Mass Communication Quarterly; ANOVA = analysis of variance.
*Omnibus F significant at p ≤ .001.


Indeed, female authors who had articles rejected were more likely than male authors who had articles rejected to say that the JMCQ peer review process nonetheless improved their current manuscript (female, M = 3.93; male, M = 3.14) and their subsequent work (female, M = 3.78; male, M = 3.08).
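The gender-by-decision comparisons above rest on one-way ANOVAs followed by Games-Howell post hoc tests (see the Table 5 note). The sketch below shows one way to run that sequence, assuming hypothetical columns group (the four gender-by-decision categories) and satisfaction; the Games-Howell step uses the pingouin library as one convenient implementation, which is a tooling assumption rather than the authors' own setup.

    # Minimal sketch: one-way ANOVA across the four gender-by-decision
    # groups, followed by Games-Howell pairwise comparisons (used because
    # of unequal group variances). Column names are hypothetical.
    import pandas as pd
    import pingouin as pg
    from scipy import stats

    def gender_by_decision_anova(df: pd.DataFrame, dv: str = "satisfaction",
                                 group: str = "group"):
        data = df[[group, dv]].dropna()
        samples = [g[dv].to_numpy() for _, g in data.groupby(group)]
        f_stat, p_value = stats.f_oneway(*samples)      # omnibus F across groups
        posthoc = pg.pairwise_gameshowell(data=data, dv=dv, between=group)
        return f_stat, p_value, posthoc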

Conclusion

This study found that authors who have submitted to JMCQ in the past five years had slightly more positive than negative attitudes toward the journal's peer review process, but that the review process was rated less positively for helping improve authors' current manuscripts or subsequent work. Data suggest that JMCQ's peer review process is not seen as particularly better or worse than the processes of other mass communication journals, except in two regards: as noted, JMCQ reviews are rated as somewhat less helpful, but the journal's reviews are perceived as clearer, more polite and professional, and certainly faster.

The regression analyses also show that whether an author’s previous submission to JMCQ was rejected was consistently one of the strongest predictors of satisfaction with the peer review process. Again, this finding raises troubling questions about the perceptions of peer review as a hurdle rather than a means to improve one’s scholarship.14

Rejection also related to perception of bias. Non-quantitative researchers were less likely to perceive JMCQ's peer review process as unbiased, and non-quantitative researchers had significantly lower rates of success publishing in JMCQ, but significantly higher rates of publishing success in other mass communication journals.

Women, however, actually rated the peer review process more positively, even when the editorial decision was negative. Women who had their articles rejected were significantly more satisfied with the peer review process and more likely to credit it with a positive impact on current and subsequent research than were men who had their last article rejected.

These gender differences are consistent with previous research on women’s responses to evaluation that suggests that women internalize negative feedback (i.e., view a negative review as reflecting weaknesses in one’s manuscript) rather than blaming others (i.e., reviewers) or external factors or processes,15 are more eager to use negative evaluative feedback to improve future performance,16 and, if they receive negative feedback, actually are more likely to improve subsequent performance than are men who receive negative feedback.17

Of course, the results of this study cannot be generalized beyond the survey's respondents. The survey focused on authors who had submitted to a single journal, and the authors were not randomly selected. Nonetheless, the data may be instructive for those who review for mass communication journals and those who make reviewer assignments. The data may also provide some context for authors submitting their work for peer review. Furthermore, while this study was not set up as a theoretical test of gender differences in response to evaluative feedback, it does suggest some avenues for future research on potentially significant gender differences beyond concerns of bias.


Authors’ Note

This editorial report was not peer-reviewed. However, a longer version of the survey results was reviewed and presented at a session sponsored by the Commission on the Status of Women at the AEJMC conference in Chicago in August 2012.

Declaration of Conflicting Interest

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Notes

1. Lutz Bornmann and Hans-Dieter Daniel, “The Effectiveness of the Peer Review Process: Inter-Referee Agreement and Predictive Validity of Manuscript Refereeing at Angewandte Chemie,” Angewandte Chemie 47 (38, 2008): 7173-78; Peter M. Rothwell and Christopher N. Martyn, “Reproducibility of Peer Review in Clinical Neuroscience: Is Agreement between Reviewers Any Greater than Would Be Expected by Chance Alone?” Brain 123 (9, 2000): 1964-69.

2. Sara Schroter, Nick Black, Stephen Evans, Fiona Godlee, Lyda Osorio, and Richard Smith, “What Errors Do Peer Reviewers Detect, and Does Training Improve Their Ability to Detect Them?” Journal of the Royal Society of Medicine 101 (10, 2008): 507-14.

3. Mark Henderson, “Problems with Peer Review,” British Medical Journal 340 (2010): c1409.

4. Gwendolyn B. Emerson, Winston J. Warme, Frederic W. Wolf, James D. Heckman, Richard A. Brand, Seth S. Leopold, “Testing for the Presence of Positive-Outcome Bias in Peer Review: A Randomized Controlled Trial,” Archives of Internal Medicine 170 (21, 2010): 1934-39; David Shatz, Peer Review: A Critical Inquiry (Lanham, MD: Rowman & Littlefield, 2004).

5. Ann C. Weller, Editorial Peer Review: Its Strengths and Weaknesses (Medford, NJ: Information Today, 2001).

6. Nicholas H. Wolfinger, Mary A. Mason, and Marc Goulden, “Problems in the Pipeline: Gender, Marriage, and Fertility in the Ivory Tower,” The Journal of Higher Education 79 (5, 2008): 388-405.

7. Laura Padilla-González, Amy S. Metcalfe, Jesús F. Galaz-Fontes, Donald Fisher, and Iain Snee, “Gender Gaps in North American Research Productivity: Examining Faculty Publication Rates in Mexico, Canada, and the U.S.,” Compare: A Journal of Comparative and International Education 41 (5, 2011): 649-68; Erin Leahey, “Gender Differences in Productivity: Research Specialization as Missing Link,” Gender & Society 20 (6, 2006): 754-80; Yu Xie and Kimberlee A. Shauman, “Sex Differences in Research Productivity Revisited: New Evidence about an Old Puzzle,” American Sociological Review 63 (6, 1998): 847-70.

8. Michael Ryan, “Evaluating Scholarly Manuscripts in Journalism and Communications,” Journalism Quarterly 59 (2, 1982): 273-85.


9. Larry Z. Leslie, “Peer Review Practices of Mass Communication Scholarly Journals,” Evaluation Review 14 (2, 1990): 151-65.

10. Paula Poindexter, “What’s Right and What’s Wrong with the Reviewing Process: AEJMC Members Evaluate Peer and Tenure Review” (paper presented at the Meeting of the Association for Education in Journalism and Mass Communication, San Francisco, CA, August 2006).

11. Bobbie J. Sweitzer and David J. Cullen, "How Well Does a Journal's Peer Review Process Function?" Journal of the American Medical Association 272 (2, 1994): 152-53; Ellen J. Weber, Patricia P. Katz, Joseph F. Waeckerle, and Michael L. Callaham, "Impact of Review Quality and Acceptance on Satisfaction," Journal of the American Medical Association 287 (21, 2002): 2790-93.

12. Sweitzer and Cullen, "How Well Does a Journal's Peer Review Process Function?"

13. Poindexter, "What's Right and What's Wrong with the Reviewing Process"; Paula Poindexter, "An Examination of AEJMC Member Perceptions of the Integrity of the Competitive Paper Review Process" (paper presented at the Meeting of the Association for Education in Journalism and Mass Communication, Boston, MA, August 2009); Sweitzer and Cullen, "How Well Does a Journal's Peer Review Process Function?"; Weber et al., "Impact of Review Quality and Acceptance on Satisfaction"; Leslie, "Peer Review Practices of Mass Communication Scholarly Journals."

14. Sweitzer and Cullen, "How Well Does a Journal's Peer Review Process Function?"

15. Angela J. Hirshy and Joseph R. Morris, "Individual Differences in Attributional Style: The Relational Influence of Role Self-Efficacy, Self-Esteem, and Sex Role Identity," Personality and Individual Differences 32 (2, 2002): 183-96.

16. Maria Johnson and Vicki S. Helgeson, “Sex Differences in Response to Evaluative Feedback: A Field Study,” Psychology of Women Quarterly 26 (3, 2002): 242-51.

17. Deidra J. Schleicher, Chad H. Van Iddekinge, Frederick P. Morgenson, and Michael A. Campion, “If at First You Don’t Succeed, Try, Try, Again: Understanding Race, Age, and Gender Differences in Retesting Score Improvement,” Journal of Applied Psychology 95 (4, 2010): 603-17.
