
Behaviour & Information Technology, 2014
http://dx.doi.org/10.1080/0144929X.2014.883551

Crowdsourcing contests: understanding the effect of competitors’ participation history on their performance

Hanieh Javadi Khasraghi∗ and Abdollah Aghaie

Department of Industrial Engineering, K.N. Toosi University of Technology, Tehran, Islamic Republic of Iran

(Received 24 May 2013; accepted 11 January 2014)

Crowdsourcing contests have become increasingly important and prevalent with the ubiquity of the Internet. Designing efficient crowdsourcing contests is not possible without a deep understanding of the factors affecting individuals’ continuous participation and their performance. Prior studies have mainly focused on identifying the effect of task-specific, environment-specific, organisation-specific, and individual-specific factors on individuals’ performance in crowdsourcing contests; to our knowledge, there are few, if any, studies evaluating the effect of individuals’ participation history on their performance. This paper aims to address this research gap using a data set from TopCoder. The study derives competitors’ participation history factors, such as participation frequency, participation recency, winning frequency, winning recency, tenure, and last performance, to construct models depicting the effects of these factors on competitors’ performance in online crowdsourcing contests. The findings demonstrate that most of the competitors’ participation history factors have a significant effect on their performance. The paper also shows that competitors’ participation frequency and winning frequency positively moderate the relationship between last performance and performance and the relationship between tenure and performance. In contrast, individuals’ participation recency and winning recency negatively moderate the relationship between last performance and performance, but have no significant effect on the relationship between tenure and performance.

Keywords: crowdsourcing contests; participation history; competitors’ performance; participation frequency; participation recency; winning frequency; winning recency; last performance; tenure; TopCoder

1. Introduction

Web 2.0 and the evolving vision of Web 3.0 have had a significant effect on the proliferation and facilitation of knowledge sharing, interoperability, user-centred design, collaboration on the World Wide Web, and crowd-centred services. This new conception of the Web is the intuition that drives crowdsourcing, crowd servicing, and crowd computing. Today, many companies increasingly rely on the intelligence of a crowd of people, instead of an employee or a machine, to have their jobs done. Acquiring the experiences and efforts of a large crowd is known as ‘crowdsourcing’, and it has been used effectively in a number of research projects and practices (Brabham 2008). In the literature, crowdsourcing is discussed as an effective practice for integrating crowds of people into a company’s tasks. Through crowdsourcing, companies seek to access the knowledge and experience of a large number of people via Internet-based platforms (Brabham 2008, Vukovic 2009, Davis 2011, Bongard et al. 2012). There are many definitions of crowdsourcing in the literature. Estellés-Arolas and González-Ladrón-de-Guevara analysed existing definitions of crowdsourcing to establish its basic characteristics. According to their definition, each crowdsourcing practice has the following characteristics: a clearly defined crowd, a task with a clear goal, a clearly defined reward for the crowd, a clearly identified crowdsourcer, clearly defined compensation for the crowdsourcer, an online assigned process of participative type, and the use of the Internet and an open call (Estellés-Arolas and González-Ladrón-de-Guevara 2012). Some crowdsourcing sites, such as Amazon’s Mechanical Turk, allow an individual to be the only provider of the solution (Yang et al. 2008, Ipeirotis 2010, Fort et al. 2011, Liu and Chen 2012); other crowdsourcing platforms, such as TopCoder, are structured as contests to allow more people to provide solutions. In crowdsourcing contests, any user can submit a solution to the task, but only the participant who has provided the solution of the best quality is awarded (DiPalantino and Vojnovic 2009, Archak 2010, Chawla et al. 2012, Gao et al. 2012, Liu and Chen 2012). In crowdsourcing contests, the overall quality of the solutions increases, since every user who submits a solution expends effort regardless of whether or not he or she wins (Gao et al. 2012, Liu and Chen 2012). In this paper, ‘competitors’ refers to any individual who participates in crowdsourcing contests.

∗Corresponding author. Email: [email protected]
This article was originally published with errors. This version has been corrected. Please see erratum http://dx.doi.org/10.1080/0144929X.2014.914772.

© 2014 Taylor & Francis



[Figure 1. Effective factors on crowdsourcing performance. The figure groups the factors into task-specific, individual-specific, environment-specific, and organisation-specific factors, which influence crowdsourcing performance as measured by the number of submissions, the average quality of submissions, and the quality of the winning submission.]

In crowdsourcing contests, a requester (also known as an employer or crowdsourcer) submits a task request (also known as a problem) to a crowdsourcing platform and provides the task requirements, the due date for task completion, the compensation for completed tasks, etc. Once a requester has defined the task requirements, the task is posted as a competition. Competitors (also known as workers or crowdworkers) choose appropriate and desired tasks to work on. Competitors submit their solutions on the crowdsourcing platform, and the participant who has provided the solution of superior quality is awarded. The crowdsourcing platform acts as an interface between requesters and competitors, ensuring the successful completion of tasks and the payment process. The immense potential of crowdsourcing contests in business and the need for designing efficient crowdsourcing contests have urged scholars to identify the factors influencing competitors’ performance and their sustained participation (Walter and Back 2011, Sun et al. 2012). In other words, since the success of crowdsourcing contests relies heavily on the efficient and continuous participation of individuals, it is necessary to identify the factors influencing competitors’ performance and their loyalty. Prior studies suggest measuring the performance of crowdsourcing contests by the number of submissions, the quality of submissions, the uniqueness of solutions (Connolly et al. 1990), the number of attracted solvers (Yang et al. 2009), or solely the quality of the winning solution (Girotra et al. 2010). Walter and Back exclude the uniqueness of solutions as well as the quality of the winning solution, since these are mainly subjective factors. They suggest the number of submissions and the average quality of submissions as good proxy measures for crowdsourcing contest performance. Figure 1 illustrates some of these measures represented in the literature.

As shown in Figure 1, the factors associated with crowdsourcing performance can be broadly classified into four groups: task-specific, individual-specific, environment-specific, and organisation-specific factors. Regarding competitors’ participation history, prior studies have either focused on the effect of winning experience on participants’ subsequent participation (Yang et al. 2008) or on the effect of individuals’ social learning process on their sustained contribution (Sun et al. 2012). There is a lack of comprehensive study that specifically evaluates the effect of individuals’ participation history on their performance in online crowdsourcing contests. The present study goes beyond prior studies by addressing this research issue. This paper contributes two models that show which participation history factors affect competitors’ performance. The first model is constructed to evaluate the effect of competitors’ participation frequency, participation recency, last performance, and tenure as main factors on their performance in crowdsourcing contests. The second model is constructed to depict the interaction effects of these factors on competitors’ performance. This paper also contributes two further models in which the significant effect of competitors’ winning frequency and winning recency on performance, and the moderating effect of winning frequency and winning recency on the relationship between last performance and performance and on the relationship between tenure and performance, are evaluated. The rest of this paper is organised as follows. First, a literature review on factors influencing crowdsourcing performance is presented. Second, the theoretical background and hypotheses are developed. Third, the research methodology and data analysis are introduced. Fourth, the implications for theory and practice and the limitations are discussed. Finally, the conclusion and future research directions are presented.

2. Literature review

Following Walter and Back’s study on crowdsourcing contests, this paper takes the number of submissions and the average quality of submissions as proxy measures of crowdsourcing performance (Walter and Back 2009). Furthermore, this study measures competitors’ performance by the quality of their submissions. Therefore, an improvement in the average performance of competitors will increase crowdsourcing performance. Some recent studies of empirical data have analysed the effect of different factors on competitors’ performance and, consequently, on crowdsourcing performance. As mentioned in the previous section, these factors can be grouped into task-specific, environment-specific, individual-specific, and organisation-specific factors. In the task-specific group, researchers have demonstrated the significant effect of reward, task specificity, contest duration for the task, and task type on crowdsourcing performance (Yang et al. 2008, 2009, DiPalantino and Vojnovic 2009, Walter and Back 2009, Archak 2010, Liu and Chen 2012). Reward and contest duration have a significant positive effect, while task complexity has a significant negative effect, on the number of solutions submitted to a contest. As given in Table 1, the only task-specific factor that has a significant effect on the average quality of submissions is task type. Individuals’ intrinsic, extrinsic, and social motives, which have been introduced in many studies as factors affecting individuals’ participation and performance in contests (Daugherty et al. 2005, Chiu et al. 2006, Nov et al. 2009, Rogstadius et al. 2011), are grouped into individual-specific factors in this paper.



In the individual-specific group, prior studies have indicated that socially motivated extrinsic motivations, extrinsic motivations with immediate pay-offs, intrinsic motivations, and competitors’ experience have a significant positive effect on the number of submissions in the contests. Extrinsic motivations with delayed pay-offs, such as learning, improving skills, earning reputation, getting employed as a freelancer, and self-marketing (Kaufmann et al. 2011), affect the average quality of submissions and, consequently, individuals’ performance. The rating of individuals also has a significant effect on crowdsourcing performance (Archak 2010). In the environment-specific group, the number of competitors, the number of superstars, and the number of non-superstars in the contest have been shown to be significant influences on crowdsourcing performance (Boudreau and Lakhani 2011, Boudreau et al. in preparation). The final group is the organisation-specific group. The only factor in this group that has been evaluated in prior studies is the brand strength of the crowdsourcing organisation; its significant effect on the number of submissions has been discussed by Walter and Back (2009). The empirical results of related work are summarised in Table 1, which presents the factors affecting each measure of crowdsourcing performance. As given in Table 1, the only participation-history-related factor found in prior studies to have a significant effect on the quality of submissions is competitors’ rating, which is grouped into individual-specific factors in this paper. According to the literature on identifying factors affecting competitors’ performance in crowdsourcing contests, there is a lack of studies evaluating the effect of competitors’ participation history on their performance. This study aims to address this research gap using data collected from TopCoder Inc., a Web-based platform which, through the use of online contests, delivers outsourced software solutions to its clients (Boudreau et al. in preparation).

3. Theoretical background and hypothesis development

In this section, individuals’ participation history factors are deduced from various areas of research, spanning from psychology to the business context.

3.1. Importance of participation recency

The effect of recency has been demonstrated in many areas of academic research. Research in psychology has established that recency has an effect on impression formation in social judgement (Richter and Kruglanski 1998). Here, recency refers to judgements being based more on late, rather than early, information. According to the literature in psychology and Information Systems, people tend to integrate recent information into their overall judgement and weigh recent information more heavily than earlier information.

Table 1. Factors influencing crowdsourcing performance.

Task-specific factors
  Effect on number of submissions: Reward (positive); Contest duration for the task (positive); Task complexity (negative)
  Effect on average quality of submissions: Task type (positive)
  Related works: Yang et al. (2008), DiPalantino and Vojnovic (2009), Walter and Back (2009), Yang et al. (2009), Archak (2010), Liu and Chen (2012)

Individual-specific factors
  Effect on number of submissions: Extrinsic factors/immediate pay-offs (positive); Extrinsic factors/social motivation (positive); Intrinsic factors (positive); Competitors’ experience in doing tasks (positive)
  Effect on average quality of submissions: Extrinsic factors/delayed pay-offs (positive); Competitors’ rating (positive)
  Related works: Daugherty et al. (2005), Chiu et al. (2006), Nov et al. (2009), Archak (2010), Kaufmann et al. (2011), Rogstadius et al. (2011)

Environment-specific factors
  Reported effects: Number of competitors (negative); Number of superstars (negative); Number of non-superstars (negative)
  Related works: Boudreau and Lakhani (2011), Boudreau et al. (in preparation)

Organisation-specific factors
  Effect on number of submissions: Brand strength
  Related works: Walter and Back (2009)



This fact is known as the recency effect in psychology (Hogarth and Einhorn 1992). The recency effect arises because people are inclined to attend to more recent information rather than to information gained further in the past (Hogarth and Einhorn 1992). The free-recall paradigm in the psychological study of memory also indicates that people remember the most recent events better than events that occurred further in the past (Murdoch et al. 1962).

The significant effect of recency has also been discussed repeatedly in the context of customer lifetime value in customer relationship management (CRM) (Ngai et al. 2009, Khajvand et al. 2011). Recency in the CRM literature is defined as the period since a customer’s last purchase; a lower value corresponds to a higher probability of the customer making a repeat purchase and, as a result, higher customer profitability. Recency in CRM indicates that customers who have purchased recently tend to purchase again in the future. Previous studies indicate that many companies involve their customers in innovation, design, development, marketing, sales, and support processes; these companies crowdsource the processes, and most of their customers participate in crowdsourcing contests and provide solutions (Leimeister et al. 2009, Lu et al. 2011). Since customers more often purchase from the companies for which they have provided innovations and solutions, we consider the participation habit of customers in crowdsourcing contests to be like their purchase habit: customers who have participated in crowdsourcing contests recently tend to participate in future contests too and, as a result, will have better performance.

In most crowdsourcing contests, and especially in TopCoder’s contests, which will be discussed in more detail later, members can comment after each contest on the solutions provided by competitors; this phase is called the ‘learning phase’. During this phase, individuals across the entire skills distribution learn more about the problem and its solutions (Boudreau et al. 2012). Therefore, competitors who have experienced this learning phase in the near past should perform well in doing tasks. Considering the literature on recency and the learning phase of crowdsourcing contests, this study supposes that individuals who have contributed a solution in the near past work more efficiently than those who contributed a solution further in the past.

An individual’s participation recency in crowdsourcing contests can be defined as the period since his or her last participation in the contests. This recognition leads to the following hypothesis:

H1. Competitors’ participation recency (period since their last participation) is negatively associated with their performance.

3.2. Importance of participation frequency

Studies on traditional companies indicate that employees’ job experience has a significant effect on their job knowledge and, as a result, on their performance. Employees who work on specific tasks frequently gain more experience at those tasks (Schmidt et al. 1986, Dokko et al. 2009). Since Jeff Howe described crowdsourcing as an act of outsourcing tasks previously performed by employees to crowdworkers (Howe 2006), we hypothesise that crowdworkers show the same behaviour as employees in doing tasks. The significant effect of frequency has also been studied in the CRM context (Liu and Shih 2005, Ngai et al. 2009, Khajvand et al. 2011). Frequency in the CRM context is defined as the number of purchases made by a customer within a certain period; a higher purchase frequency indicates greater customer loyalty. Drawing on this literature on frequency, an individual’s participation frequency in crowdsourcing contests can be measured by the number of previous contests in which the individual has participated. This paper hypothesises that higher participation frequency leads to better performance in the future; hence, the following hypothesis is proposed:

H2. Competitors’ participation frequency is positively associated with their performance.

3.3. Importance of prior performance

Individuals’ prior performance shapes their perceptions of self-efficacy. Prior studies spanning from psychology to Information Systems support the idea that self-efficacy and outcome expectations are positively related (Bouffard-Bouchard 1990, Kim et al. 2012, Sun et al. 2012). Self-efficacy refers to people’s perception of their capability to accomplish particular tasks (Sun et al. 2012). Goal-setting theory suggests that people adjust their perceptions of self-efficacy according to their prior performance and satisfaction levels (Locke and Latham 2002, Sun et al. 2012). In the context of academic tasks, Bouffard-Bouchard’s findings suggest that the perception of self-efficacy is a viable construct for comprehending the capability and competence to do academic tasks (Bouffard-Bouchard 1990). In the computer training context, various studies have examined the effect of computer self-efficacy on computer training performance (Compeau and Higgins 1995, Johnson and Marakas 2000) and on IT usage (Easley et al. 2003, Venkatesh et al. 2003). Other prior research has concentrated on the effect of Internet self-efficacy on Internet usage (Hsu and Chiu 2004, Lam and Lee 2005). Research by Yi and Davis indicates that software self-efficacy has a positive effect on task performance with software (Mun and Davis 2003). The empirical study by Kim et al. in emergency management showed that emergency management self-efficacy positively affects emergency management (Kim et al. 2012). Prior studies in the virtual communities context indicate that individuals’ self-efficacy has a significant positive effect on their knowledge sharing behaviour (Hsu et al. 2007, Sun et al. 2012).



Sun et al.’s findings indicate that self-efficacy moderates the relationship between motivational factors and the sustained participation of individuals in transactional virtual communities (Sun et al. 2012). Self-efficacy in the context of crowdsourcing refers to competitors’ confidence in their capability to complete tasks successfully and provide a solution of superior quality. We suppose that, in crowdsourcing contests, individuals renew their perception of self-efficacy and self-competence after each online task by assessing their performance. High self-efficacy leads solvers to perceive themselves as highly competent and as having a high chance of completing tasks successfully; as a result, this perception affects their performance in subsequent contests. This paper hypothesises that competitors’ last performance in crowdsourcing contests affects their perception of self-efficacy and, as a result, influences their performance. Therefore, the following hypothesis is introduced:

H3. Competitors’ last performance is positively associatedwith their performance.

3.4. Importance of tenure

The significant effect of tenure has been studied in the context of traditional work organisations. In a traditional work organisation, the employer assigns work to the workers, whereas in the crowdsourcing approach the worker chooses which tasks to work on (Hirth et al. 2011). Studies on employees’ performance in traditional organisations show that longer-term employees demonstrate higher performance than shorter-term employees (Sparrow and Davies 1988, Kacmar et al. 2003, Ng and Feldman 2010). Nov et al. studied the effect of individuals’ tenure on the number of photos they share in an online photo-sharing community; their results indicate that photo sharing declines with users’ tenure in the community (Nov et al. 2009). In the context of crowdsourcing contests, individuals’ tenure can be defined as the amount of time since they joined the platform. As individuals’ tenure increases, they become more acquainted with contest processes, can increase their experience and expertise, gain reputation, and can ask questions in the online forums designed for community members. According to the related prior studies in online communities and the traditional organisation context, this paper proposes that individuals’ tenure increases their experience and consequently affects their performance in the contests. Thus, the following hypothesis is proposed:

H4. Competitors’ tenure is positively associated with theirperformance.

3.5. Moderating effects

Moderating effects on the relationships between the independent variables and performance have attracted many researchers’ interest in online communities and crowdsourcing contests (Nov et al. 2009, Walter and Back 2009, Yang et al. 2009, Boudreau et al. 2012, Sun et al. 2012). Researchers argue that the contribution to model development is larger if moderating variables are included in the research model.

3.5.1. Moderating effect on the relationship between last performance and performance

In the previous section, it was hypothesised that individuals’ last performance has a positive effect on their performance. However, individuals may have participated in a contest a long time ago, and their perception of last performance and self-efficacy may fade as time passes. This study therefore evaluates individuals’ recency as a moderating variable in order to better understand the effect of individuals’ last participation on their performance. Accordingly, this study introduces the following hypothesis:

H5. Competitors’ participation recency negatively moderates the relationship between last performance and performance.

This paper will also evaluate the moderating effect of frequency on the relationship between individuals’ last performance and performance. Thus, the following hypothesis is proposed:

H6. Competitors’ participation frequency positively moderates the relationship between last performance and performance.

3.5.2. Moderating effect on the relationship between tenure and performance

In the previous section, it was hypothesised that individuals’ tenure has a positive effect on their performance in crowdsourcing contests. Tenure is defined as the amount of time since individuals joined the crowdsourcing platform. Although this time may increase individuals’ experience and, as a result, affect their performance, there are members of the platform who joined a long time ago but have not participated in a single contest for a long time. This observation necessitates considering other factors that may moderate the effect of tenure on individuals’ performance. Thus, the following hypotheses are proposed:

H7. Competitors’ participation recency negatively moderates the relationship between tenure and performance.

H8. Competitors’ participation frequency positively moderates the relationship between tenure and performance.

3.5.3. Interaction effect of participation frequency and participation recency on performance

This paper has hypothesised that individuals’ participation recency is negatively associated with performance, because people remember the most recent events better than those that occurred further in the past.



[Figure 2. Research model. The model links recency (H1), frequency (H2), last performance (H3), and tenure (H4) to performance, with moderating effects of recency and frequency on the last performance–performance relationship (H5, H6) and on the tenure–performance relationship (H7, H8), and a moderating effect of frequency on the recency–performance relationship (H9).]

Therefore, a long interval after the last participation may cause individuals to forget even rudimentary knowledge about solving problems. On the other hand, it has been hypothesised that high participation frequency is positively associated with performance. In this section, the following hypothesis is proposed to evaluate the interaction effect of competitors’ participation frequency and recency on their performance:

H9. Competitors’ participation frequency positively moderates the relationship between recency and performance.

Based on the theoretical background and the hypotheses discussed above, this study establishes a research model which suggests four primary links and five moderating links between the factors involved in competitors’ performance in crowdsourcing contests. Figure 2 illustrates these relationships; the model in Figure 2 is used to test the hypotheses.

4. Research methodology

4.1. Data collection

The hypotheses of this paper are tested using data collected from TopCoder Inc., a Web-based platform which, through the use of online contests, delivers crowdsourced software solutions to its clients. The TopCoder member base comprises almost 400,000 registered software developers. The availability of highly talented software developers who are interested in participating in the contests to provide software solutions makes TopCoder the most reputed crowdsourcing contest platform for software-related tasks (Boudreau et al. in preparation). Development competitions are posted on TopCoder weekly. Winning design submissions on the TopCoder platform go as input into Development competitions, in which competitors are required to provide actual code implementing the UML design; winning Development submissions in turn go as input into Assembly competitions. This paper uses competitors’ participation history in Development contests. Data are extracted by going through members’ competition history pages. The data set includes information on 1474 members who registered during 2001–2013 and participated in at least one Development contest. In total, there are 10,547 records related to these 1474 members’ participation history. Members who have participated only once in the contests are eliminated from the data set. Records related to the first participation of each member are also omitted, since these records do not have recency, frequency, and last performance values. After these preparations, 9052 records related to the participation history of 950 competitors remain in the data set. One of the important components of members’ participation history data is the scores they have obtained in the various Development competitions. After each Development contest, each competitor’s submission is graded by three reviewers according to accuracy, correctness, and other prespecified dimensions (Archak 2010). The placement of competitors is determined by the average score across all three reviewers. Since each competitor’s score in each contest is the average of the scores assigned by the three reviewers, and competitors are ranked based on this average score, the final score in each competition is a good proxy measure of performance in that competition. Also, according to the literature, competitors’ score in a contest is a salient predictor of their performance in that contest (Archak 2010, Boudreau et al. in preparation). Therefore, competitors’ score is used as a proxy measure of their performance in this paper, and the score that a competitor obtains in each competition is the dependent variable in this study.
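The sample construction described above can be sketched as follows. This is a hedged illustration only: the file name and column names (member_id, contest_date, registration_date, score) are hypothetical, and the actual TopCoder data layout may differ.

```python
import pandas as pd

# Load one row per (member, development contest) with the contest score.
history = pd.read_csv("development_contest_history.csv",
                      parse_dates=["contest_date", "registration_date"])

# Drop members who participated in only one development contest.
n_contests = history.groupby("member_id")["contest_date"].transform("count")
history = history[n_contests > 1].sort_values(["member_id", "contest_date"])

# Each member's first participation is removed later, once the history-based
# variables (recency, frequency, last performance) have been computed,
# which yields the 9052-record working sample described above.
print(len(history), "records for", history["member_id"].nunique(), "members")
```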

4.2. Variable measurement

To test the hypotheses and obtain prediction models, it is necessary to measure the values of all the variables defined in Section 3. In this section, the measurement method used for each variable is defined.

Recency: This independent variable is measured by the number of days that have passed since a competitor’s last participation in the Development contests up to the current participation.

Frequency: This independent variable is measured by the number of Development contests in which a competitor has participated before the current participation.

Last performance: This independent variable is measured by the score that a competitor obtained in the last Development contest he participated in, before the current participation.

Tenure: This independent variable is measured by the number of days that have passed since the date on which a competitor registered on the TopCoder site up to the current participation.

Performance: This dependent variable is measured by the score that a competitor obtained in the current contest; the effect of the independent variables on this variable is evaluated.
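As a rough illustration, the per-record variables above could be derived from the prepared participation history along the following lines. The column names are hypothetical assumptions carried over from the earlier sketch, not the paper’s actual field names.

```python
import pandas as pd

history = pd.read_csv("development_contest_history.csv",
                      parse_dates=["contest_date", "registration_date"])
history = history.sort_values(["member_id", "contest_date"])
grp = history.groupby("member_id")

# Recency: days since the member's previous development contest.
history["recency"] = (history["contest_date"] - grp["contest_date"].shift(1)).dt.days
# Frequency: number of development contests entered before the current one.
history["frequency"] = grp.cumcount()
# Last performance: score obtained in the previous development contest.
history["last_performance"] = grp["score"].shift(1)
# Tenure: days since the member registered on TopCoder.
history["tenure"] = (history["contest_date"] - history["registration_date"]).dt.days
# Performance (dependent variable): score in the current contest.
history["performance"] = history["score"]

# First-participation records have no recency or last performance and are dropped.
history = history.dropna(subset=["recency", "last_performance"])
```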



Table 2. Descriptive analysis of variables.

Variable            Mean     Median   Standard deviation   Min   Max
Performance         88.69    90.71    9.15                 22    100
Recency             45.98    14       117.28               1     2240
Frequency           14.69    9        16.07                1     99
Last performance    87.95    90       9.78                 22    100
Tenure              744.72   610      603.96               3     4285

As mentioned in Section 4.1, this paper considers competitors’ score as the measure of their performance. Since this paper aims to evaluate the effect of competitors’ participation history factors on the score they obtain in each contest, these variables are calculated for each competitor in each contest in which he has participated. Some records are omitted from the data set because they do not have any value for the recency, frequency, and last performance fields. The descriptive analysis of the variables is given in Table 2.

4.3. Research model

Ordinary least squares (OLS) multiple regression analysis is used to test the hypotheses and determine the strength of the relationships between the dependent and independent variables. OLS is one of the most popular approaches for testing hypotheses in the cited studies on online crowdsourcing contests (Lakhani and Wolf 2003, Ariely et al. 2009, Yang et al. 2009). Since the residuals of all variables are skewed, natural-log-transformed data are used in this paper. The hypotheses are tested with two regression models. In model 1, only the main effects of the variables are included. In model 2, interactions of the variables are included; because the interaction terms with tenure and last performance already incorporate frequency and recency, their main effects are not listed separately in model 2. The term ξ is the random error in both models 1 and 2.

Model 1:

ln(Performance) = β0 + β1 ln(Recency) + β2 ln(Frequency) + β3 ln(Last performance) + β4 ln(Tenure) + ξ.

Model 2:

ln(Performance) = β0 + β3 ln(Last performance) + β4 ln(Tenure) + β5 ln(Recency) × ln(Last performance) + β6 ln(Frequency) × ln(Last performance) + β7 ln(Recency) × ln(Tenure) + β8 ln(Frequency) × ln(Tenure) + β9 ln(Frequency) × ln(Recency) + ξ.
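To make the estimation concrete, a minimal sketch of fitting models 1 and 2 by OLS on log-transformed variables is given below (statsmodels assumed). The input file name and column names are hypothetical; the paper itself used IBM SPSS Statistics 21.0 for this analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file with one row per participation record (Section 4.2 variables).
df = pd.read_csv("prepared_history.csv")

# Natural-log transforms, as in models 1 and 2.
for col in ["performance", "recency", "frequency", "last_performance", "tenure"]:
    df["ln_" + col] = np.log(df[col])

# Model 1: main effects only.
model1 = smf.ols(
    "ln_performance ~ ln_recency + ln_frequency + ln_last_performance + ln_tenure",
    data=df).fit()

# Model 2: interaction terms, with the main effects of last performance and tenure.
model2 = smf.ols(
    "ln_performance ~ ln_last_performance + ln_tenure"
    " + ln_recency:ln_last_performance + ln_frequency:ln_last_performance"
    " + ln_recency:ln_tenure + ln_frequency:ln_tenure"
    " + ln_recency:ln_frequency",
    data=df).fit()

print(model1.summary())
print(model2.summary())
```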

Table 3. Results of hypotheses tests.

Variables                               Coefficient   Model 1              Model 2
(Constant)                              β0            −15.666*** (0.445)   −15.317*** (0.456)
ln(Recency)                             β1            −0.035*** (0.008)    –
ln(Frequency)                           β2            0.317*** (0.015)     –
ln(Last performance)                    β3            3.855*** (0.102)     3.816*** (0.107)
ln(Tenure)                              β4            0.005** (0.001)      0.004 (0.003)
ln(Recency) × ln(Last performance)      β5            –                    −0.014** (0.006)
ln(Frequency) × ln(Last performance)    β6            –                    0.045*** (0.009)
ln(Recency) × ln(Tenure)                β7            –                    −0.001 (0.001)
ln(Frequency) × ln(Tenure)              β8            –                    0.003** (0.001)
ln(Frequency) × ln(Recency)             β9            –                    0.023** (0.009)
Number of observations                                9052
R²                                                    0.316                0.317

Notes: Dependent variable: ln(Performance). Standard errors in parentheses.
*Significance level p < 0.1; **p < 0.05; ***p < 0.001.

4.4. Results and analysis

IBM SPSS Statistics 21.0 is used to perform the multiple regression analysis. With the data collected from TopCoder, the two models are tested and the results are summarised in Table 3. The R² value indicates how much of the variance in competitors’ performance is explained by the competitors’ participation history factors. The overall results indicate that the relationship between competitors’ participation history factors and their performance is statistically significant.

Table 3 summarises the results of the hypothesis tests. Model 1 supports H1, H2, H3, and H4: recency has a significant negative effect, while frequency, last performance, and tenure have significant positive effects on competitors’ performance. Tenure was found to have a significant effect on performance in model 1 (β4 = .005, p < .05) but not in model 2 (β4 = .004, p > .1), partially supporting H4 and indicating that its effect on performance may be moderated by other factors. Frequency was observed to positively moderate the relationship between last performance and performance (β6 = .045, p < .001). Furthermore, recency was observed to negatively moderate the relationship between last performance and performance (β5 = −.014, p < .05). These findings provide support for H5 and H6. No significant interaction effect between tenure and recency was observed (β7 = −.001, p > .1); thus H7 was not supported.



Figure 3. Moderating effect of frequency on the relationship between tenure and performance.

Frequency was observed to positively moderate the relationship between tenure and performance (β8 = .003, p < .05); this finding supports H8. The interaction effect between frequency and recency is also included in the regression analysis, since the prior literature suggests including interaction effects between moderators (Stone-Romero and Liakhovitski 2002). The results demonstrate that frequency positively moderates the relationship between recency and performance (β9 = .023, p < .05); this finding supports H9. Both models 1 and 2 explain about 32% of the variance in performance. Model 2 provides more detail, but for predicting competitors’ performance in crowdsourcing contests the two models are almost the same. The supported interaction effects are illustrated in Figures 3 to 6. Figure 3 shows that tenure has a positive and significant influence on performance under the high-frequency condition. In Figure 4, the effect of last performance on performance is significant under the low-recency condition, but the relationship is weaker under the high-recency condition. This demonstrates that recency moderates the effect of individuals’ last performance on their performance in the contests.

Figure 5 illustrates the moderating effect of frequency on the relationship between recency and performance. As shown, individuals’ participation frequency moderates the negative effect of recency on their performance. Figure 6 demonstrates that the effect of last performance on performance is significant under both the low- and high-frequency conditions, but the relationship is stronger under high frequency.

Figure 7 shows the results of the hypothesis tests. Seven of the nine paths exhibited a p-value of less than .05, of which four exhibited a p-value of less than .001; one hypothesis was not significant, and another was partially significant at the .05 level of significance.

In models 1 and 2, we evaluated only the effect of competitors’ participation history factors on their performance, without considering whether they have won any competition.

Figure 4. Moderating effect of recency on the relationship between last performance and performance.

Figure 5. Moderating effect of frequency on the relationship between recency and performance.

Figure 6. Moderating effect of frequency on the relationship between last performance and performance.



[Figure 7. Results of the research model. The path diagram annotates each hypothesised link from recency, frequency, last performance, and tenure (and their interactions) to performance with its sign and significance level (n.s. = not significant).]

Therefore, models 3 and 4 are constructed to evaluate the effect of competitors’ winning frequency and winning recency on their performance. The term ξ is the random error in both models 3 and 4.

Model 3:

ln(Performance) = β0 + β1 ln(Win recency) + β2 ln(Win frequency) + β3 ln(Last performance) + β4 ln(Tenure) + ξ.

Model 4:

ln(Performance) = β0 + β3 ln(Last performance) + β4 ln(Tenure) + β5 ln(Win recency) × ln(Last performance) + β6 ln(Win frequency) × ln(Last performance) + β7 ln(Win recency) × ln(Tenure) + β8 ln(Win frequency) × ln(Tenure) + β9 ln(Win frequency) × ln(Win recency) + ξ.

In the above models, win frequency is an independent variable measured by the number of Development contests that a competitor has won before the current participation. Win recency is an independent variable measured by the number of days that have passed since the last time the competitor won a Development contest up to the current participation. These variables are calculated for each competitor in each contest in which he has participated.
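A hedged sketch of deriving these two variables per record is shown below; it assumes a per-record frame with hypothetical columns member_id, contest_date, and a boolean won flag, and it is not the paper’s actual extraction code. Models 3 and 4 can then be fitted with the same OLS recipe sketched for models 1 and 2.

```python
import pandas as pd

history = pd.read_csv("development_contest_history.csv", parse_dates=["contest_date"])
history = history.sort_values(["member_id", "contest_date"])
grp = history.groupby("member_id")

# Win frequency: number of development contests won before the current one.
history["win_frequency"] = grp["won"].cumsum() - history["won"].astype(int)

# Win recency: days since the member's most recent win before the current contest.
prev_win_date = history["contest_date"].where(history["won"])
prev_win_date = prev_win_date.groupby(history["member_id"]).shift(1)
prev_win_date = prev_win_date.groupby(history["member_id"]).ffill()
history["win_recency"] = (history["contest_date"] - prev_win_date).dt.days

# Records with no prior win have no win recency and are excluded (Section 4.4).
history = history.dropna(subset=["win_recency"])
```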

Models 3 and 4 are tested with the data collected from TopCoder. Since most of the competitors have not won any Development contest, records that do not have any value for win frequency and win recency are omitted from the data set. After these preparations, 5981 records related to the participation history of 424 competitors remain in the data set. The descriptive analysis of the variables is given in Table 4.

Table 4. Descriptive analysis of variables.

Variable            Mean     Median   Standard deviation   Min   Max
Performance         91.02    92.97    7.92                 22    100
Win recency         105.64   33       197.61               1     2298
Win frequency       8.012    4        10.23                1     76
Last performance    90.80    92.77    8.08                 22    100
Tenure              866.87   730      611.60               3     4282

Table 5. Results of hypotheses tests for models 3 and 4.

Variables                                  Coefficient   Model 3              Model 4
(Constant)                                 β0            −14.605*** (0.432)   −14.342*** (0.422)
ln(Win recency)                            β1            −0.185*** (0.016)    –
ln(Win frequency)                          β2            0.499*** (0.013)     –
ln(Last performance)                       β3            3.865*** (0.104)     3.827*** (0.107)
ln(Tenure)                                 β4            0.006** (0.001)      0.006 (0.005)
ln(Win recency) × ln(Last performance)     β5            –                    −0.128*** (0.011)
ln(Win frequency) × ln(Last performance)   β6            –                    0.151*** (0.014)
ln(Win recency) × ln(Tenure)               β7            –                    −0.003 (0.001)
ln(Win frequency) × ln(Tenure)             β8            –                    0.024*** (0.001)
ln(Win frequency) × ln(Win recency)        β9            –                    0.080*** (0.008)
Number of observations                                   5981
R²                                                       0.411                0.415

Notes: Dependent variable: ln(Performance). Standard errors in parentheses.
*Significance level p < 0.1; **p < 0.05; ***p < 0.001.

Models 3 and 4 are tested by running an OLS regression on the data set, and the results are summarised in Table 5.

As can be seen from Tables 3 and 5, the effect of win frequency and win recency on competitors’ performance is stronger than that of frequency and recency. Likewise, the moderating effect of win frequency and win recency on the relationship between last performance and performance and on the relationship between tenure and performance is stronger than that of frequency and recency. The R² value of models 3 and 4 is about .41, which indicates that about 41% of the variance in performance (score) can be predicted from the independent variables. The R² value of models 3 and 4 is higher than that of models 1 and 2, which shows that the prediction quality of models 3 and 4 is better.



5. Discussion and implications

5.1. Key findings

This study investigates the participation history factors affecting competitors’ performance in online crowdsourcing contests. Model 1 was constructed to evaluate the effects of competitors’ participation recency, participation frequency, last performance, and tenure on their performance. Model 2 was constructed to evaluate the moderating effects of participation recency and participation frequency. A number of findings can be derived from this study. First, participation frequency and last performance have a significant positive effect on competitors’ performance in online crowdsourcing contests, whereas participation recency has a significant negative effect on performance. Second, the results show that participation frequency has a significant moderating effect on the relationship between last performance and performance: the effect of last performance on performance is stronger for competitors who have participated more frequently in the contests. Furthermore, a significant moderating effect of participation recency on the relationship between last performance and performance is also found: the effect of last performance on performance is weaker for competitors whose last participation lies further in the past. It is also shown that participation frequency has a significant moderating effect on the relationship between tenure and performance, meaning that the effect of tenure on performance is stronger for competitors who have participated more frequently in the past. In contrast to the hypothesis, participation recency has no moderating effect on the relationship between tenure and performance. Finally, the interaction effect of participation frequency and participation recency was evaluated, and the results showed that frequency moderates the effect of participation recency on performance. To see how competitors’ winning history affects their performance, model 3 was introduced to evaluate the effect of competitors’ winning frequency and winning recency on their performance. The results indicate that the effect of competitors’ winning frequency and winning recency in model 3 is stronger than that of competitors’ participation frequency and participation recency in model 1. Furthermore, model 4 was introduced, in which the moderating effect of competitors’ winning frequency and winning recency on the relationship between last performance and performance and on the relationship between tenure and performance is found to be stronger than that of competitors’ participation frequency and recency in model 2. The findings of this paper indicate that the prediction quality of models 3 and 4 is better than that of models 1 and 2.

5.2. Theoretical implications

The results of this study offer several implications for researchers on online communities and crowdsourcing. This paper aims to fill research gaps in identifying factors that influence competitors’ performance in crowdsourcing contests. Previous studies were mostly conducted to identify the task-specific, environment-specific, individual-specific, and organisation-specific factors that affect individuals’ performance in crowdsourcing contests, with the aim of identifying contest design issues that motivate more individuals to participate in crowdsourcing contests and of improving the quality of submissions.

This paper advances prior studies by identifying the effect of competitors’ participation and winning history factors on their performance. The results highlight the significant effect of participation recency, participation frequency, winning recency, winning frequency, and last performance on competitors’ performance, which has been overlooked in previous studies on online crowdsourcing contests. Furthermore, the moderating effects of participation recency, participation frequency, winning recency, and winning frequency are identified. Identifying these factors helps determine competitors’ lifetime value in crowdsourcing contests. This paper thus provides a rich understanding of the effect of competitors’ participation history and their lifetime value on their performance in crowdsourcing contests.

5.3. Practical implications

This paper has important implications for practice as well. First, crowdsourcing platform sponsors should pay attention to competitors’ participation and winning history to identify their lifetime value. For example, competitors with high participation frequency, low participation recency, high winning frequency, low winning recency, and high last performance are considered to have high lifetime value. Given that participation frequency is an effective factor in competitors’ performance, platform sponsors need to develop strategies that motivate individuals to participate more frequently in the contests; and, considering the significant negative effect of participation recency, platform sponsors also need to apply strategies that shorten competitors’ between-participation intervals. That is to say, if competitors participate more frequently in the contests and do not leave long intervals between their participations, their performance in competitions will increase significantly.
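The following is a hedged illustration of the lifetime-value heuristic described above. The quantile thresholds and column names are illustrative assumptions, not values reported in the paper.

```python
import pandas as pd

def high_lifetime_value(profiles: pd.DataFrame) -> pd.Series:
    # Flag competitors whose history suggests high lifetime value:
    # frequent and recent participation and wins, plus a high last score.
    return (
        (profiles["frequency"] >= profiles["frequency"].quantile(0.75))
        & (profiles["recency"] <= profiles["recency"].quantile(0.25))
        & (profiles["win_frequency"] >= profiles["win_frequency"].quantile(0.75))
        & (profiles["win_recency"] <= profiles["win_recency"].quantile(0.25))
        & (profiles["last_performance"] >= profiles["last_performance"].quantile(0.75))
    )
```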

5.4. Research limitations

This paper evaluates only the effect of competitors’ participation and winning history factors on their performance in online crowdsourcing contests. Since most prior studies have shown that motivational factors have a significant effect on individuals’ continued participation and performance, these factors may moderate the relationship between participation and winning history factors and performance.



This paper does not evaluate the interaction effect of motivational factors and participation history factors on performance.

6. Conclusions

Prior studies have evaluated the effect of task-specific, environment-specific, individual-specific, and organisation-specific factors on competitors’ performance in crowdsourcing contests. These findings help practitioners design crowdsourcing contests more efficiently. However, very few, if any, studies have evaluated the effect of competitors’ participation history on their performance; this paper aims to address this research gap. Theoretical models explaining how competitors’ participation history and winning history factors affect their performance are developed and tested. The results indicate that competitors’ participation frequency, participation recency, winning frequency, winning recency, and last performance significantly influence their performance in online crowdsourcing contests. In addition to the basic models, extended theoretical models explaining how the influence of last performance and tenure on performance is contingent on participation frequency, participation recency, winning frequency, and winning recency are also developed and tested. The results show that competitors’ participation frequency and winning frequency moderate the relationship between their last performance and performance and the relationship between their tenure and performance. Furthermore, the significant moderating effect of participation frequency on the relationship between participation recency and performance, and the significant moderating effect of winning frequency on the relationship between winning recency and performance, are also demonstrated. The findings also indicate that participation recency and winning recency have a significant moderating effect on the relationship between last performance and performance. This study contributes to theory building by demonstrating the significant effect of competitors’ participation and winning history on their performance in online crowdsourcing contests. It also contributes to practice by suggesting that platform sponsors develop strategies to motivate individuals to participate more frequently in the contests and apply strategies to shorten competitors’ between-participation intervals.

To extend the findings of this paper, future studies could focus on the effect of other participation history factors, such as individuals’ average performance, mean time between participations, and mean time between wins, on their performance. Future studies could also use this paper’s findings to cluster individuals according to their participation history and lifetime value. By clustering individuals into groups, crowdsourcing platforms could recommend tasks and problems to the individuals of each cluster, considering the specificity and complexity of the problem and the loyalty level of each cluster.
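As an illustration of this future direction, the sketch below clusters competitors by standardised participation history features (scikit-learn assumed). The feature names, file name, and number of clusters are hypothetical choices, not results from this paper.

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical file with one row per competitor and aggregated history features.
profiles = pd.read_csv("competitor_profiles.csv")
feature_cols = ["frequency", "recency", "win_frequency",
                "win_recency", "last_performance", "tenure"]

X = StandardScaler().fit_transform(profiles[feature_cols])
profiles["cluster"] = KMeans(n_clusters=4, random_state=0).fit_predict(X)

# Inspect the average history profile of each cluster to characterise it.
print(profiles.groupby("cluster")[feature_cols].mean())
```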

References

Archak, N., 2010. Money, glory and cheap talk: analyzing strategic behavior of contestants in simultaneous crowdsourcing contests on TopCoder.com. Proceedings of the 19th international conference on World Wide Web, New York, NY, USA: ACM, 21–30. Available from: http://dl.acm.org/citation.cfm?id=1772694

Ariely, D., Bracha, A., and Meier, S., 2009. Doing good or doing well? Image motivation and monetary incentives in behaving prosocially. The American Economic Review, 99 (1), 544–555. Available from: http://www.jstor.org/stable/10.2307/29730196

Bongard, J., et al., 2012. Crowdsourcing predictors of behavioral outcomes. IEEE Transactions on Systems, Man, and Cybernetics, 43 (1), 176–185. Available from: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6202707

Boudreau, K.J. and Lakhani, K.R., 2011. Incentives and problem uncertainty in innovation contests: an empirical analysis. Management Science, 57 (5), 843–863. doi:10.1287/mnsc.1110.1322

Boudreau, K.J., et al., 2012. Field evidence on individual behavior & performance in rank-order tournaments. Harvard Business School Technology & Operations Management Unit Working Paper (13-016).

Bouffard-Bouchard, T., 1990. Influence of self-efficacy on performance in a cognitive task. The Journal of Social Psychology, 130 (3), 353–363.

Brabham, D.C., 2008. Crowdsourcing as a model for problem solving: an introduction and cases. Convergence: The International Journal of Research into New Media Technologies, 14 (1), 75–90. doi:10.1177/1354856507084420

Chawla, S., Hartline, J.D., and Sivan, B., 2012. Optimal crowdsourcing contests. Proceedings of the twenty-third annual ACM-SIAM symposium on discrete algorithms, Kyoto, Japan: SIAM, 856–868.

Chiu, C.-M., Hsu, M.-H., and Wang, E.T.G., 2006. Understanding knowledge sharing in virtual communities: an integration of social capital and social cognitive theories. Decision Support Systems, 42 (3), 1872–1888. doi:10.1016/j.dss.2006.04.001

Compeau, D.R. and Higgins, C.A., 1995. Computer self-efficacy: development of a measure and initial test. MIS Quarterly, 19 (2), 189–211.

Connolly, T., Jessup, L.M., and Valacich, J.S., 1990. Effects of anonymity and evaluative tone on idea generation in computer-mediated groups. Management Science, 36 (6), 689–703.

Daugherty, T., et al., 2005. Organizational virtual communities: exploring motivations behind online panel participation. Journal of Computer-Mediated Communication, 10 (4).

Davis, J.G., 2011. From crowdsourcing to crowdservicing. IEEE Internet Computing, 15 (3), 92–94. doi:10.1109/MIC.2011.61

DiPalantino, D. and Vojnovic, M., 2009. Crowdsourcing and all-pay auctions. Proceedings of the tenth ACM conference on electronic commerce (EC ’09), New York, NY, USA: ACM, 119–128. doi:10.1145/1566374.1566392

Dokko, G., Wilk, S.L., and Rothbard, N.P., 2009. Unpacking prior experience: how career history affects job performance. Organization Science, 20 (1), 51–68.

Easley, R.F., Devaraj, S., and Crant, J.M., 2003. Relating collaborative technology use to teamwork quality and performance: an empirical analysis. Journal of Management Information Systems, 19 (4), 247–268.

Estellés-Arolas, E. and González-Ladrón-de-Guevara, F., 2012. Towards an integrated crowdsourcing definition. Journal of Information Science, 38 (2), 189–200.
