
Assessing the Quality of Self-reported Measures and the Reliability of Empirical Findings: Exploring Creativity Differences across Worldwide Agency Creatives and Managers

Sheila Sasser, Eastern Michigan University, USA
Scott Koslow, Waikato Management School, New Zealand
Mark Kilgour, Waikato Management School, New Zealand

1 Introduction

Research often emphasizes that creativity is the most critical element for advertising effectiveness in the marketplace (Ang et al., 2007; West et al., 2008; El-Murad and West, 2004; Smith et al., 2007). Given its importance, it is not surprising that there has been exponential growth in creativity research (Sasser and Koslow, 2008a). This research includes influences on audience members' processing (Goldenberg and Mazursky, 2008; Ang et al., 2007; Smith et al., 2007; Pieters et al., 1999), creative template techniques (Goldenberg, Mazursky and Solomon, 1999), remote conveying (Rossiter, 2008) and other approaches (West et al., 2008; Kover, 1995). Social environment impacts on advertising creativity (Li et al., 2008) and client organizational influences on creativity (Sasser and Koslow, 2008a; Koslow et al., 2006) have also been studied.

Such research progress has enabled a renewed emphasis on empirical creativity studies, although many researchers must still rely heavily on self-reported data, given the nature of the industry and its confidentiality agreements. This often raises the question of whether such data are biased by the nature of self-report and disclosure. For example, researchers ask creatives and account managers to assess campaigns on which they personally worked, collaboratively with other team members. Originality and strategy are assessed based upon these responses and analyzed along with other independent variables collected from the same individuals. A typical concern is the potential for common methods variance, whether warranted or not, so this chapter probes such a scenario to provide insight for researchers. Independent of this effect, another issue is whether the perspectives held by creatives or account managers slant the findings.

The goal of this chapter is to delve into these measurement concerns by triangulating two studies of creatives and account managers. First, self-reported assessments by creatives and account managers are compared with those of outside, independent evaluators. Specifically, seasoned creatives' assessments of their own work's originality are compared to the assessments of outside independent judges. Likewise, seasoned account executives assess their own work for being on-strategy, and these self-assessments are compared to those of outside independent judges. Overall, there is a high level of agreement between creatives' self-assessments of originality and those of independent judges. Similarly, there is high agreement between account executives' self-assessments of strategy and those of independent judges. The findings support the contention that common methods variance is not a serious concern for respondents reporting on assessments or judgements in their own domain of expertise.

Second, the self-report survey data collected from an ongoing global study named AdCrisp© are analyzed using only stratified groups with high levels of expertise in their particular domain. The point is to assess whether this influences results by taking a primary slice of the data set for measurement and comparison. For example, only seasoned creatives are used to predict originality and only experienced account executives to predict strategy. Five major effects of primary interest in this field of research were analyzed. Overall, earlier findings were supported and the earlier effects still held under these conditions. Although more research is needed, it is concluded that self-assessments can provide solid measurements as a basis for study. This is critical because without professional and practitioner self-assessments, scholars would be solely reliant on student data that may lack external validity and be difficult to replicate in the real world.

2 Theoretical Background

Before examining common methods bias issues, a brief review of this area of research is required. Increasingly, researchers have been advancing what we know about advertising creativity. The originality aspect of creative advertising breaks through clutter (Pieters et al., 2002), allowing creative advertisements to elicit more consumer attention (Till and Baack, 2005). Attention leads to deeper processing, enhanced attitudes and higher persuasion. Originality draws attention, while appropriateness prompts engagement and strategy elicits action. In addition to customer views of advertising creativity, practitioner perspectives have also been examined (West et al., 2008). Engagement in the creative process is evolving to include consumers, clients, and agency executives (Sasser, 2008; Phelps et al., 2004) as digital environments streamline interaction and activate social media platforms.


It is essential to fully understand the agency and client co-creation process, as it is now quickly expanding to include other stakeholders such as consumers. This may further complicate the elusive task of identifying latent or obscure factors impacting judgments, self-report assessments, and possibly agreement on what constitutes creativity. As polling and voting for winners, coupled with immediate feedback on creative alternatives, become increasingly common, traditional measures of creativity may be affected. A creativity CCI model uses PLS structural equation modelling to examine relationships and factors between the agency and client as co-creators across concept dimensions (Sasser et al., 2008). Surprisingly, clients often approve risky, novel interactive digital media campaigns with far less testing, measurement, and substantiation than they require for highly creative breakthrough campaigns, yet this laxer standard does not seem to apply when pushing the edge of the creativity envelope (Sasser et al., 2007). This sets a dangerous precedent when entering the new era of co-creation, as agency-client relationships utilize and prompt consumer involvement and feedback. What future role will consumer stakeholders play in agency-client creativity? Thus, it may be useful to look at some of the historical creativity issues.

When clients work with advertising agencies, creativity is often the primary trait sought (Griffin et al., 1998). At the onset of a relationship, the most crucial issue is typically the creativity of the agency (Waller, 2004; Henke, 1995), as evidenced across new business pitches. When the business is new and subject to a fresh start or greater risk-taking, there may be a bit more freedom to explore. However, as the relationship ages, strategy and appropriateness may supplant raw originality and creativity, due to numerous factors. Issues like performance, communications, and trust also emerge and may dominate the agency-client relationship (Davies and Prince, 2005; Waller, 2004; Henke, 1995; West and Paliwoda, 1996), but creativity is still critical. Political gamesmanship is often deemed negative. However, properly directed politics may actually facilitate a client decision when there is insecurity in the decision-making process, or when a highly creative, "riskier" campaign (Sasser and Koslow, 2008b) may deliver a needed outcome and there is nothing left to lose.

Across a number of articles, creativity researchers have produced a series of similar findings. For example, Koslow et al. (2003) found that creatives believe originality is the most important factor, while account managers are more concerned about strategy. This research actually resulted in a useful formula for creativity that many scholars had previously thought impossible. The client impact on creativity was examined in a subsequent article that clearly illustrated the power of clients' willingness to explore new ideas (Koslow, Sasser and Riordan, 2006). This work also showed that when clients are willing to accept creative ideas, they are also more likely to champion or seek out a genuinely creative campaign, and they will reject work if it is not creative enough. Although campaigns expected to be copy-tested are not less creative, a copy-test decreases the likelihood of a client using a campaign. Finally, agencies have ways of swaying clients to accept highly creative work by playing organizational politics (Sasser and Koslow, 2008b). West (1999) and West and Berthon (1997) also note that clients are often afraid to take creative risks.

Despite all of these advances in understanding the effect of creativity, measurement limitations are still a concern. Much of the research uses either student samples, which have obvious limitations on external validity (Kilgour and Koslow, 2009a), or practitioner self-assessments. It would be ideal to collect professional responses and then assess the work independently. However, collecting such desirable data has proven almost impossible in field settings due to respondents' confidentiality concerns. For example, AdCrisp© and EuroCrisp© yield ongoing international data sets comprising nearly 2,000 creative campaigns collected across global markets. These studies are based on major worldwide client accounts with substantial billings. Such competitive global clients resist many open research designs that might compromise their strategic position. Li et al. (2008) is one of the few studies that combines self-assessment with external assessments, by focusing on campaigns that have been submitted to awards. The Li et al. (2008) study is then, by design, a censored population subject to other complications and stipulations. Given the other obstacles to data collection and measurement, it is essential to better understand self-assessments and how to mitigate any problems they introduce.

Although self-reported dependent and independent measures may lead to potential common methods bias (Podsakoff et al., 2003), some scholars contend that concerns over the bias are greatly overrated (Spector, 2006). Podsakoff et al. (2003) also note that when common methods bias is present, it usually has an effect on all measures and not just a selected few. That is, methods bias is rarely perceived as interacting with a limited set of items in a questionnaire: "The existing empirical evidence suggests that these interactions may be fairly rare in research settings, and even when they do occur, they are generally weak" (Podsakoff et al., 2003, p. 897). Thus it is rare that an interaction term between two measures could be spuriously produced. Prior work using the AdCrisp© and EuroCrisp© data set has generally used interactions, but the present research, which did not use interactions, still reports findings generally consistent with previously published results. After replication and testing across a variety of measures, including structural equation modelling, the findings exhibit consistency and hold up under intense scrutiny.


In the research design stage, the recommendations of Podsakoff et al. (2003) to prevent methods bias through improved research procedures were followed: 1) separating measurements, 2) protecting anonymity, 3) counterbalancing question order, and 4) improving scale items. First, elicitation of the dependent and independent measures was separated by another section of the questionnaire that asked for factual classifications of the campaigns, as a cross-check. This is appropriate because it allows previously recalled information to be cleared from short-term memory before proceeding to another set of related questions. Second, anonymity was strongly protected, and subjects were informed that the study was confidential in advance and again at the end of the response period. Third, question order was counterbalanced, as sketched below. Finally, to reduce ambiguity of items, the AdCrisp© and EuroCrisp© questionnaires were subjected to two stages of pretesting and revision before the final version was produced. Multiple language translations of the instrument also followed this process. Such methods should limit common methods variance, which is continually scrutinized and checked.
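To make the third remedy concrete, the following is a minimal sketch (in Python) of how question-block order might be counterbalanced across respondents. The block names and the simple cyclic rotation are illustrative assumptions, not the chapter's documented procedure.

# Minimal sketch: counterbalance question-block order across respondents,
# one of the Podsakoff et al. (2003) procedural remedies discussed above.
# Block names are hypothetical; the chapter does not report its exact scheme.
from itertools import islice

BLOCKS = ["independent_measures", "factual_classifications", "dependent_measures"]

def rotated_orders(blocks):
    """Yield cyclic rotations of the block list, one per respondent."""
    n = len(blocks)
    i = 0
    while True:
        yield [blocks[(i + j) % n] for j in range(n)]
        i += 1

# Assign each of the first six respondents a rotated block order.
for respondent_id, order in enumerate(islice(rotated_orders(BLOCKS), 6), start=1):
    print(respondent_id, " -> ".join(order))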

3 Study 1: Actor-Observer Differences in Creativity Assessments

Some recent findings suggest validation tests to determine whether research findings are susceptible to common methods variance (CMV). Kilgour and Koslow (2009b) compare the self-reports of advertising agency creatives and account executives to external evaluations. That work focuses on other issues regarding advertising creativity, but it bears on several issues relevant to CMV. A brief summary of this research is offered here, with emphasis on the CMV issues.

Subjects were asked to develop creative campaigns based on given criteria. There were 49 creatives and 56 account executives. Each developed three campaigns for a hypothetical brand of household product. Several other manipulations were used that are not relevant to the work here.

Respondents evaluated their own work, and a jury of four external judges also evaluated the same work. Five items were used for originality and five additional items for strategy. These items have been used in earlier studies (e.g., Koslow et al., 2003; Koslow, Sasser et al., 2006).

O-mode factor analysis was used to show the agreement among the judges and the respondent. This mode of factor analysis is similar to traditional R-mode factor analysis, in which items are assessed for how they co-vary. The difference is that O-mode analysis focuses on several judges (rather than items), to see how well different judges' assessments co-vary. Ideally, all judges should load on a single factor, just as related items load on a single factor. Readers familiar with Cronbach's coefficient alpha should note that alpha can be derived from the ratios of variances from this analysis.
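As an illustration, here is a minimal Python sketch of an O-mode-style analysis. The ratings are simulated (real campaign ratings are confidential), and first-principal-component loadings are used as a simple stand-in for the factor loadings reported in Tables 1 and 2.

# O-mode sketch: rows are campaigns, columns are raters (four external judges
# plus the self-assessment); we ask whether all five raters load on one factor.
import numpy as np

rng = np.random.default_rng(0)
true_quality = rng.normal(size=100)            # hypothetical latent originality
ratings = np.column_stack([                    # five raters = five "variables"
    true_quality + rng.normal(scale=0.8, size=100) for _ in range(5)
])

corr = np.corrcoef(ratings, rowvar=False)      # raters correlated over campaigns
eigvals, eigvecs = np.linalg.eigh(corr)        # eigenvalues in ascending order
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])   # largest-component loadings
print(np.round(np.abs(loadings), 3))           # high, similar loadings = agreement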

Kilgour and Koslow (2009b) aimed to show that judges gave scores similar to those of highly skilled professionals. Seasoned creatives were held as the gold standard on assessments of advertisements' originality, while seasoned account executives served as the standard for assessing how on-strategy campaigns were. Most of the creative subjects and the majority of the account executive subjects were considered seasoned, and only these were used in the analysis. Thus, there were two O-mode analyses: one for creatives assessing originality and the other for account executives assessing strategy.

The loadings for seasoned creatives are listed in Table 1. As shown, all four judges and the self-assessments load on a single factor, with loadings all above .6. Likewise, in Table 2, the assessments of the four judges plus the seasoned account executives' self-assessments are factor analysed. Again, all loadings are above .6.

Table 1: O-Mode Factor Analysis: Originality Assessments by Creatives and Judges

           Loading
Judge A    .660
Judge C    .667
Judge E    .727
Judge R    .639
Self       .638

Table 2: O-Mode Factor Analysis: Strategy Assessments by Account Executives and Judges

           Loading
Judge A    .818
Judge C    .749
Judge E    .748
Judge R    .721
Self       .630
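Echoing the note above that reliability can be derived from this analysis, the loadings reported in Tables 1 and 2 can be turned into a summary reliability figure. The sketch below uses composite reliability (McDonald's omega for standardized loadings) as a convenient related index; it is not necessarily the exact alpha-from-variance-ratios computation the authors describe.

# Composite reliability from the published loadings (a related index, not
# necessarily the authors' exact calculation).
def composite_reliability(loadings):
    """omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    s = sum(loadings)
    uniq = sum(1 - l ** 2 for l in loadings)   # standardized error variances
    return s ** 2 / (s ** 2 + uniq)

originality = [0.660, 0.667, 0.727, 0.639, 0.638]   # Table 1: judges + self
strategy = [0.818, 0.749, 0.748, 0.721, 0.630]      # Table 2: judges + self

print(round(composite_reliability(originality), 2))  # about 0.80
print(round(composite_reliability(strategy), 2))     # about 0.85

Both values sit at or above the conventional .7 threshold, consistent with the chapter's reading of the tables as strong rater agreement.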

Findings show that external judges' evaluations closely match respondent self-reports in several key situations. In measurements of campaign originality, the responses of seasoned creative staff match those of the external judges, so there appears to be agreement on campaign originality. When measuring how appropriate or on-strategy campaigns are, the responses of seasoned account executive staff also match those of the external judges. Thus, researchers may be fairly confident that self-report data drawn from seasoned creatives and account executives do not appear to be affected by CMV, as shown in the study 1 exercise.

4 Hypotheses Development

Given that creatives and account executives can be good judges of their own work, the next question is whether focusing solely on their judgements changes results found in previous studies. Thus, this chapter now reviews and attempts to replicate several key effects previously found.

There is a case to be made that intrinsic motivation impacts originality in campaign creativity, particularly at senior creative levels. Client openness, or willingness to explore, is also postulated to impact originality in campaign creativity, and appropriateness in campaign strategy. Consumer research is thought to impact appropriateness in campaign strategy. Politics is identified as possibly having an effect on appropriateness in campaign strategy. Therefore the following hypotheses can be offered:

H1: Intrinsic motivation increases the "originality" of a campaign.
H2: Client openness to explore increases the "originality" of a campaign.
H3: Client openness to explore increases how "on-strategy" a campaign is.
H4: Consumer research usage increases how "on-strategy" a campaign is.
H5: Politics decreases how "on-strategy" a campaign is.

The question for researchers in this particular study is whether restricting the research to these measures and respondents alters the findings. When the focus is placed solely on predicting how "on-strategy" or "appropriate" a campaign is using senior account managers, do previously reported findings change? Alternatively, if only seasoned creatives are used to predict "originality," do previously reported findings change?


5 Study 2: Predicting Creativity and Comparing Self Report Data

Consistent with other creativity studies (West et al., 2008; Li et al., 2008), this study was based on questionnaires collected from agency executives. This research was part of a larger advertising creativity study that is now a global survey in multiple languages. Called the Advertising Creativity and Integration Strategy Project, or AdCrisp© and EuroCrisp©, this ongoing study's data set has been used in a variety of other articles (e.g., Sasser et al., 2007; Koslow et al., 2006). Over 400 respondents from different advertising agency offices reported on up to three of their most recent campaigns, for a total sample of 1,188 campaigns out of the larger data set. Views were solicited from creative, media, research, account, and other executives via questionnaires.

Study 2 takes a subset of creatives and account executives and replicates prior regression analyses to see if the findings hold. Study 1 showed that seasoned creatives and account executives have fairly objective self-assessments. In study 2, similar subjects' self-assessments were used to predict originality and strategy from a variety of independent variables.

5.1 Data Collection

The stratified sampling frame targeted the 30 top worldwide advertising agencies across the top global public holding companies. These agencies serviced representative categories of clients in packaged goods, foods, entertainment, services, automotive, retail, durables, military, and manufacturing, so as not to be biased toward any one type of industry or area. Given the worldwide designation, the agencies offered fully integrated marketing communications, traditional agency-of-record creative services, planning, strategy, media, direct, channel, customer relationship, interactive digital, database, and public relations skills.

Invitations were extended by area to agency staff who had been involved in at least three major recent client campaigns. Due to questionnaire length, a personal intercept method was employed: surveys were distributed in person to individuals and groups of agency employees during the normal business day, with the permission of human resources, public relations contacts and appropriate managers.

Respondents were ranked based on demographic profiling in the instrument to determine qualifications, experience, years in the business and the areas in which they had worked at the agency. Only those meeting the same standard of seasoned creatives and account executives used in study 1 were included in this analysis.


Thus, 85 senior account managers were used in this analysis, plus an entirely separate set of 85 senior creatives for matching. The account managers reported on 247 campaigns and the creatives on 255. The AdCrisp© and EuroCrisp© questionnaires use self-reports. A major advantage of self-report is that only those with a close familiarity with a campaign will understand the constraints placed on it, something of which an external judge is rarely aware. Only someone who knows the detailed case history could reasonably judge such a campaign's creativity. Amabile (1996) also notes that self-reports of creative behavior are good measures when the respondents frequently have their work evaluated, whether by copytesting or creative director peer scrutiny, as is common in agencies. This results in benchmarking norms that are understood for campaign development.

5.2 Measures

Thirteen independent variables were used in this analysis. The constructs are the major constructs identified in prior factor-analytic research. Based on qualitative interviews from another phase of this research, the questionnaire was designed around the words and phrases used by advertising agency employees in focus groups and interviews. Measures include:

1. Use of copytesting
2. Intrinsic motivation
3. Client sophistication
4. Client willingness to explore
5. Time pressure
6. Politics
7. Budget tightness
8. Client's brief contains strategy
9. Availability of consumer research
10. Client's position in their hierarchy
11. Supportiveness of the agency's culture for good creative work
12. Agency structure
13. Client's decision apprehension

These scales were measured with 40 items, most with a minimum of three items per scale. The variance explained was 69%, with all items loading as expected. Due to the large number of items relative to the number of observations, some instability was observed, but it was minor and did not impact results. Most items loaded above .7, with all but four above .6. Three items loaded greater than .5 (.58, .57 and .50), and one at .47. Some correlation among items was also observed, but it was moderate. Cronbach's alphas were mostly above .7 and ranged from .62 to .82. All communalities were above .6 except for one at .58. Given that this analysis was a replication of prior analyses where the items fit well, this was considered acceptable for the current exercise. The analysis could have been cleaned up by dropping some items, but this was rejected as the more conservative choice, since dropping problematic items did not really change any later analyses.
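For readers who want the alpha computation spelled out, here is a minimal Python sketch for a single scale. The item responses are simulated, since the AdCrisp© and EuroCrisp© data are confidential.

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances)/total variance).
import numpy as np

def cronbach_alpha(items):
    """items: campaigns x items matrix for one scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=200)                        # hypothetical construct
scale_items = np.column_stack([latent + rng.normal(scale=1.0, size=200)
                               for _ in range(3)])   # a 3-item scale
print(round(cronbach_alpha(scale_items), 2))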

The dependent variables of originality and strategy were also analysed for internal consistency using factor analysis. Four items were used for strategy and five for originality. The variance explained was 74% across two factors. All items loaded as expected, with high loadings above .7. Cronbach's alphas were .84 and .93, respectively.

5.3 Findings

Using the SAS system, two stepwise regression analyses were conducted: the first modelled originality as the dependent variable and the second modelled strategy. All variables were scaled and centered prior to analysis. The best-fitting model for originality explained 56% of the variance, and the best one for strategy explained 37%. No interaction terms were used. Figure 1 graphs the results for those variables significant in one model or the other. Four variables were not significant in either model: agency culture, budget tightness, copytesting and politics. However, politics had a marginal influence on strategy, as it often does under certain conditions.
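The chapter's analysis was run in SAS. As a rough illustration of the analysis pattern (standardize the predictors, then enter them by forward stepwise selection), here is a minimal Python sketch with simulated data; the variable names are hypothetical stand-ins for the thirteen measures.

# Forward stepwise OLS sketch: add, one at a time, the candidate predictor
# with the smallest p-value below the entry threshold.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(X, y, enter_p=0.05):
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            fit = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = fit.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= enter_p:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit()

rng = np.random.default_rng(2)
X = pd.DataFrame(rng.normal(size=(250, 3)),
                 columns=["intrinsic_motivation", "client_openness", "time_pressure"])
y = 0.5 * X["intrinsic_motivation"] + 0.3 * X["client_openness"] + rng.normal(size=250)
X = (X - X.mean()) / X.std()              # scale and center, as in the chapter
model = forward_stepwise(X, y)
print(model.params.round(2), round(model.rsquared, 2))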


Figure 1: Parameters predicting originality and strategy from stepwise regression. [The figure plots each significant variable's parameter for originality (vertical axis) against its parameter for strategy (horizontal axis), both axes running from -0.2 to 0.5. Labelled points: client willingness to explore, client decision apprehension, client's brief contains strategy, intrinsic motivation, time pressure, consumer research, client sophistication, client's position in hierarchy, and agency structure.]

H1 is accepted based upon the model, as intrinsic motivation has the highest impact on originality in campaign creativity. H2 is also accepted, since openness to explore has the second highest impact on originality (refer to Table 4). H3 is accepted due to the finding that client openness to explore also has an impact on how on-strategy a campaign is (refer to Table 5). H4 is accepted in that consumer research usage has a major impact on how strategic a campaign is. H5 is only marginally accepted: politics impacts the appropriateness of strategy in campaign creativity (refer to Table 5), but not to the level of the first two independent variables.

Information from the stepwise regression output for the two models conveys: 1) originality as predicted by these measures using only senior creatives, and 2) appropriateness as predicted by these measures using only senior account managers. Overall, there is strong support for the main findings of prior self-reported survey research. First, the best single predictor of originality continues to be intrinsic motivation. The best predictors of appropriateness are use of consumer research and client openness. Sample stratification may be very useful if researchers are facing hurdles or experiencing common methods bias issues, particularly with global samples of self-report data.

6 Discussion: Addressing Potential Common Methods Bias

Creativity researchers have previously found that extrinsic motivation is not as powerful as intrinsic motivation (Amabile, 1996). This study supports the notion that intrinsic motivation is critical for inspiring creativity. Highly creative, original and strategically appropriate advertising is indeed a function of intrinsic motivation, client openness to new ideas and a willingness to explore (Sasser and Koslow, 2008b; Koslow et al., 2006). Or, in the case of politics and research, it may be a measure of how desperate clients are to achieve their goals (West and Berthon, 1997). This validation of self-report research data confirms such earlier research findings and replicates the results of previous researchers in the field of advertising creativity.

7 References

Amabile, T. (1996). Creativity in Context. Boulder, Colorado: Westview Press.

Ang, S., Lee, Y. & Leong, S. (2007). The Ad Creativity Cube: Conceptualization and Initial Validation. Journal of the Academy of Marketing Science, 35 (2), 220-232.

Davies, M. & Prince, M. (2005). Dynamics of Trust between Clients and Their Advertising Agencies: Advances in Performance Theory. Academy of Marketing Science Review, 11, 1-32.

Goldenberg, J. & Mazursky, D. (2008). When Deep Structures Surface: Design Structures That Can Repeatedly Surprise. Journal of Advertising, 37 (4), 21-34.

Goldenberg, J., Mazursky, D. & Solomon, S. (1999). The Fundamental Templates of Quality Ads. Marketing Science, 18 (3), 333-351.

Griffin, G. (2008). From Performance to Mastery: Developmental Models of the Creative Process. Journal of Advertising, 37 (4), 99-113.

Griffin, T., McArthur, D., Yamaki, T. & Hidalgo, P. (1998). Ad Agencies' Performance and Role in Providing Communication Services in Chile, Japan and the United States. Journal of Advertising Research, 38 (September/October), 65-75.

Henke, L. (1995). A Longitudinal Analysis of the Ad Agency-Client Relationship: Predictors of an Agency Switch. Journal of Advertising Research, 35 (March/April), 24-30.

Kilgour, M. & Koslow, S. (2009a). Why and How Do Creative Thinking Techniques Work?: Trading Off Originality and Appropriateness To Make More Creative Advertising. Journal of the Academy of Marketing Science, 37 (3), 298-309.

Kilgour, M. & Koslow, S. (2009b). If Creative Thinking Techniques Are So Great, Why Aren't They Used More? Actor-Observer Differences. INFORMS Marketing Science Conference, Ann Arbor, Michigan, USA, 4-6 June, 27-29.

Koslow, S., Sasser, S. & Riordan, E. (2003). What Is Creative to Whom and Why? Perceptions in Advertising Agencies. Journal of Advertising Research, 43 (1) (March), 96-110.


Koslow, S., Sasser, S. & Riordan, E. (2006). Do Marketers Get the Advertising They Need or the Advertising They Deserve?: Agency Views of How Clients Influence Creativity. Journal of Advertising, 35 (3), Fall, 85-105.

Kover, A. (1995). Copywriters' Implicit Theories of Communication: An Exploration. Journal of Consumer Research, 21 (March), 596-611.

Li, H., Dou, W., Wang, G. & Zhou, N. (2008). The Effect of Agency Creativity on Campaign Outcome: The Moderating Role of Market Conditions. Journal of Advertising, 37 (4), 109-120.

Phelps, J., Lewis R., Mobilio, L., Perry, D. & Raman, N. (2004). Viral Marketing or Electronic Word-of-Mouth Advertising: Examining Consumer Responses and Motivations to Pass Along Email. Journal of Advertising Research, 44 (4) December: 333-348.

Pieters, R., Rosbergen, E. & Wedel, M. (1999). Visual Attention to Repeated Print Advertising: A Test of Scanpath Theory. Journal of Marketing Research, 36 (November), 424-438.

Pieters, R., Warlop, L. & Wedel, M. (2002). Breaking through the Clutter: Benefits of Advertisement Originality and Familiarity for Brand Attention and Memory. Management Science, 48 (6), 765-781.

Podsakoff, P., MacKenzie, S., Lee, J. Y. & Podsakoff, N. (2003). Common Method Biases in Behavioral Research: A Critical Review of the Literature and Recommended Remedies. Journal of Applied Psychology, 88 (5), 879-903.

Rossiter, J. (2008). The Nature and Purpose of ‘Creativity’ in an Ad. Journal of Advertising, 37 (4), 139-144.

Spector, P. (2006). Method Variance in Organizational Research: Truth or Urban Legend? Organizational Research Methods, 9 (2), 221-232.

Sasser, S. (2008). Creating Passion to Engage versus Enrage Consumer Co-Creators with Agency Co-Conspirators: Unleashing Creativity. Journal of Consumer Marketing, 25(3), 183-186.

Sasser, S. & Koslow, S. (2008a). Desperately Seeking Advertising Creativity. Journal of Advertising, 37 (4), 1-10.

Sasser, S. & Koslow, S. (2008b). The Creative Advertising Development Process: Is Organizational Politics a Recipe for Disaster or a Dysfunctional Antidote? New Trends in Advertising Research, Chapter 5, 103-119, Lisbon, Portugal: Silabo.

Sasser, S., Merz R. & Koslow, S. (2008). A Global Creativity Model Emerges: Evolving a Theoretical and Empirical Framework for the CCI Campaign Creativity Index. International Conference on Research in Advertising, European Advertising Academy, June 27-28, Antwerp, Belgium.

Sasser, S., Koslow, S. & Riordan, E. (2007). Creative and Interactive Media Use by Agencies: Engaging an IMC Media Palette for Implementing Advertising Campaigns. Journal of Advertising Research, 47 (3), 237-256.

Smith, R., MacKenzie, S., Yang, X., Buchholz, L. & Darley, W. (2007). Modelling the Determinants and Effects of Creativity in Advertising. Marketing Science, 26 (6), 819-833.

Waller, D. (2004). Developing an Account-Management Lifecycle for Advertising Agency-Client Relationships. Marketing Intelligence & Planning, 22 (1), 95-112.

West, D. (1999). 360° of Creative Risk. Journal of Advertising Research, 39 (January/February), 39-50.

West, D. & Berthon, P. (1997). Antecedents of Risk-Taking Behavior by Advertisers: Empirical Evidence and Management Implications. Journal of Advertising Research, 37 (September/October), 27-40.

West, D., Kover, A. & Caruana, A. (2008). Practitioner and Customer Views of Advertising Creativity: Same Concept, Different Meaning? Journal of Advertising, 37 (4), 35-45.

West, D. & Paliwoda, S. (1996). Advertising Client-Agency Relationships: The Decision Structure of Clients. European Journal of Marketing, 30 (8), 22-39.