
A primer on research in mediated environments: Reflections on cybermethodology

(Working paper series #14.2.m)

Mary Aiken,

RCSI CyberPsychology Research Centre, Dublin, Ireland.

Ciarán Mc Mahon,

RCSI CyberPsychology Research Centre, Dublin, Ireland.


Abstract

Recent controversies surrounding the ethics of research on social networks have

come at a time when social science research itself is undergoing something of a

transformation process. In this paper we consider how the changing technological

environment has produced difficulties for lawmakers, and consequently moral

decision-making. We attempt to differentiate between human and machine data,

and furthermore to situate the researcher in cyberspace, a context which is

necessarily both inter- and trans-disciplinary. Fundamentally, our purpose is to

encourage scholars of this context to reflect on their biases and prejudices, both in

terms of theory and method, prior to engaging in any research which has both a

human and technological component.


“... for from such crooked wood as man is made of, nothing perfectly straight can be built.”

(Kant, 1784, p. 20)

Introduction

Remarking upon technological developments of recent times has become something

of a rhetorical ritual in academia, so much so that this brief paper has chosen to mark a new

era by omitting anything of the kind. What follows is a preliminary attempt to establish a

platform for academic research involving human use of information communication

technology, as we approach an era of ubiquitous computing, 5G connectivity, cyber-physical

systems, wearable devices, internet of things, smart cities, driverless cars, 3D printing, virtual

reality gaming, commercial drones, robot caregivers and so on. Fundamentally, the question

of what it means to be human is being asked more frequently now than ever before, and our

human research methodology must be reflected upon. All of these new technologies present

incredible potential for human society, but also significant risk to secure societies. The

purpose of this brief paper is not to review current and expected research methods in this

context – as such a review would be a much larger piece of work, and almost immediately out

of date – but rather to deliver what might be termed prolegomena to any future study of

cyberbehaviour.

The Facebook/PNAS paper

In June 2014 a paper was published which serves as a useful exemplar for this discussion,

both because of the methodological issues it raises and the public furore it caused (Kramer,

Guillory, & Hancock, 2014). The study itself, which has been subsequently deemed ‘socially

irresponsible’ by the British Psychological Society (Bullen & Oates, 2014), is simple enough

in its original idea but complicated in its execution. At its heart is a conflict between two simple ideas: does being exposed to other people’s emotions make us more or less likely to

experience those same emotions? Notably, in the year preceding the gathering of data for this study (January 2012), there had been a certain amount of press ('Is Facebook making

us sad?', Copeland, 2011) and research (Jordan et al., 2011) suggesting that being exposed to

positive, happy expressions of emotion on Facebook could make users of the site experience

negative, sad emotions – a theory alluded to in the closing paragraphs (Kramer et al., 2014, p.

8790). Consequently, the aim of the study was to test for ‘emotional contagion’ – whether

being exposed to positive emotions makes people more or less likely to express positive

emotions, and vice versa.
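
To make the dependent variable in such studies concrete: 'expressed emotion' is typically operationalised as a count of positive and negative words in each post. The following is a minimal, purely illustrative sketch of that kind of dictionary-based word count; the word lists and the scoring function are invented placeholders, not the instrument used by Kramer et al. (2014).

```python
# Purely illustrative sketch of a dictionary-based emotion word count of the
# kind used to operationalise 'expressed emotion' in text. The word lists and
# scoring are invented placeholders, not any study's actual instrument.
from dataclasses import dataclass

POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "glad"}   # placeholder lexicon
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible", "lonely"}    # placeholder lexicon


@dataclass
class EmotionScore:
    positive: int   # count of positive-lexicon words in the post
    negative: int   # count of negative-lexicon words in the post
    total: int      # total word count, for computing proportions


def score_post(text: str) -> EmotionScore:
    """Count positive and negative emotion words in a single post."""
    words = [w.strip(".,!?;:'\"").lower() for w in text.split()]
    return EmotionScore(
        positive=sum(w in POSITIVE_WORDS for w in words),
        negative=sum(w in NEGATIVE_WORDS for w in words),
        total=len(words),
    )


if __name__ == "__main__":
    score = score_post("Feeling great today, love this weather!")
    print(f"{score.positive} positive, {score.negative} negative of {score.total} words")
```

Whether such surface counts capture felt emotion at all is, of course, precisely the kind of methodological question this paper asks researchers to reflect upon.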

However, the scientific value of these questions is not under scrutiny: it is the methodology used to ask them which leaves something to be desired. Over 600,000

Facebook users had their Newsfeeds subtly altered over the period of a week, so that they

would see either less positive or less negative emotional content, with the dependent variable

being the amount of positive or negative emotional content users produced in response

(Kramer et al., 2014). The problems with this experiment are substantial: users were not

notified of this experiment, so there was no informed consent, long a hallmark of human

subjects research. It does not seem that minors were removed from the experiment, another

important aspect of research practice. Research has shown that many children under the age

of 13 have Facebook profiles (Holloway, Green, & Livingstone, 2013), despite the fact that

under US law the minimum age requirement is 13 (“Children’s Online Privacy Protection

Act,” 1998), and is stated as such under Facebook’s own terms (Facebook, 2014). Therefore

the following question must be asked: how many of the 600,000 subjects were minors, and

more importantly how many were under 13? In the context of cyber ethics this is an

important issue: we do not yet have credible age authentication procedures.


While Facebook’s current Terms of Service includes reference to data being used for

research, this was not the case at the time of the data collection (Hill, 2014). Additionally, it appears that this project was effectively not subject to normal institutional review

board practices in academia – when the study was presented to Cornell University’s panel,

the data had already been collected by Facebook, so it was not deemed to present a risk to a

human population (Carberry, 2014). In the wider context, while there has been much public

and academic outcry about ordinary people being experimented on, this does not seem to be

something that Facebook comprehend – being more concerned about how the experiment was

communicated, rather than what it connoted (Krishna, 2014).

While the publication of this paper raises a lot of important issues, such as the funding

of public research (Vertesi, 2014), its legality, both in the EU (O’Brien, 2014) and US

(Meyer, 2014) contexts, the more salient discussions have been regarding the concepts of

ethics and power in big data social science research methodology (Crawford, 2014;

Schroeder, 2014; Tufekci, 2014). As the traditional psychological experimental method itself

is a social relationship and historical development (Danziger, 1994), in this new cyber-

environment we are consequently on the brink of a new construction and should proceed

carefully.

Ethics, morality and law

At the outset, we should first consider our values and biases about how we approach this

problem space. While the vast majority of the nations of the world have signed the 1948

Universal Declaration of Human Rights, how those rights and responsibilities are interpreted

in a technologically-mediated 21st century is as yet unclear, and differs according to

jurisdiction. We are now almost daily seeing consecutive and varying interpretations of what

the internet is supposed to be – from the United States Supreme Court (American


Broadcasting Cos., Inc., et al. v. Aereo, 2014), to the Court of Justice of the European Union

(Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, 2014), to the

Turkish Constitutional Court (“Turkish high court lifts YouTube ban,” 2014). On the one

hand, due to the cross-cutting nature of the internet, we may see a decline in the very concept

of national jurisdictions, and on the other, in reaction, we could have an increasing retreat

into local, firewalled intranets: both possibilities representing certain moral perspectives which require reflection. A recurrent trope in many public discussions is something along the lines of the ‘free and open internet’; where exactly this exists, or has ever existed, remains

unclear, let alone what it actually means.

Additionally, we have continuing revelations from Mr Edward Snowden, sparking all

manner of discussions and developments, which seem far from resolved (Dale, 2013). There

are also ongoing international declarations in Brazil (“NETmundial Multistakeholder Statement,” 2014) and Estonia (“Tallinn Agenda for Freedom Online,” 2014) on the political future of the internet, with contributions from academics and researchers, such as the Association for Computers and the Humanities (“Open Letter on Net Neutrality,” 2014). Certain researchers can refer to the Association of Internet Researchers’ statements on ethics (Markham & Buchanan, 2012), but how strong such positions are in the face of vacillating

legal frameworks and changing methodologies is hard to know. As such, we are in very

uncertain times as regards the legal, moral and ethical fundamentals of internet research, and

these questions are likely to remain unanswered for some time, presenting challenges for

security research in particular.

Human vs. machine data

An essential observation in this maelstrom is that we are dealing primarily with issues of

data, not necessarily knowledge, and that at least some of that data is personally identifiable


data (Ohm, 2010) and as a result, the people involved will have both identity and territorial

claims on it (Mc Mahon & Aiken, 2014). Yet on the other hand, some of the data available in

cyberspace may not be personally identifiable at all – the question is where we can draw the

line between human and machine. For example, if a person is tweeting publicly, can we

legitimately assume that, as it is public, they no longer have any rights over that information,

and that we may analyse and parse it any way that we like, with our machine intelligence? Or

is there something barbaric about this, about what the tragic figure Carmen Hermosillo (2004)

called the ‘commodification of human interaction’?

This in itself feeds into deeper cultural issues of what we are using information

communication technology to do at all: are we trying to make computers more like people, or

make people more like computers? In the former case, there are issues like forcing machines

to forget information after a certain time period (e.g. Snapchat; Shein, 2013) and in the latter,

there are ideas like the ‘quantified self’ (Wolf, 2010), where users attempt to structure and

organise their lives more logically and with digital feedback.

We can strive to make the real world more virtual, and yet also attempt to establish

real world constraints in cyberspace – the point is that cyber impacts on the real world, and

vice versa (Slane, 2007), and this is where ethics is essential. This dichotomy is the crux of

much debate, and one on which each researcher must reflect individually, but reflect

nonetheless.

Location of the researcher

Moreover, there are issues around the very position of the investigator themselves. As

anthropologists and ethnographers know, to fully understand a context, researchers often

immerse themselves in the culture they are studying (Gupta & Ferguson, 1997). However, as

students of psychology will know from the famous ‘On being sane in insane places’ paper,


our “perceptions and behaviour [are] controlled by the situation, rather than being motivated

by a malicious disposition” (Rosenhan, 1973, p. 257). Fundamentally, in a cyberspace

context, which is increasingly ubiquitous (Resnick, 2013) yet personalised (Pariser,

2011), whether we want to research at a remove or at close quarters, we may encounter problems. How can we objectively research a context which we now find difficult to escape? On the other hand, particularly with regard to issues of cyber security and law enforcement, how do we effectively research the less accessible reaches of the internet? And

how do we maintain objectivity therein?

At any rate, when we consider the cyber-environment, we may like to reflect on the

observation that traditional research methodology (e.g. Robson, 2002) is beginning to look

quaint. Consider, for example, the notion of researcher distance – that one should attempt to

be objective, take a representative sample of the population in question and observe it

solemnly. Such an ancient and cautious scientific practice is being rapidly disrupted by big data collection, exemplified by a move away from desktop-level datasets towards what is

being dubbed ‘N=(all)’ (Mayer-Schönberger & Cukier, 2013). It is not, for example, beyond

the realms of possibility that a researcher would be examining a data set within which

their own personally identifiable information would be present – the map-maker within the

map. How are such complications dealt with? This is Hacking’s (1995) ‘looping effects’ of

human categorisations taken to a new degree: not merely categorising others and thereby

affecting them, but categorising ourselves also, and affecting ourselves in the process.
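
As a small, hypothetical illustration of the map-maker-within-the-map problem described above, a researcher might run a pre-analysis check for their own identifier before touching an ‘N=(all)’-style export. The file path, column name and hashing scheme below are assumptions for illustration only, not part of any real pipeline.

```python
# Hypothetical pre-analysis check: does the analyst's own identifier appear in
# the dataset about to be analysed? The file path, column name and hashing
# scheme are assumptions for illustration, not part of any real pipeline.
import csv
import hashlib
import os


def hash_id(raw_id: str) -> str:
    """Hash an identifier so the check itself never logs or compares raw PII."""
    return hashlib.sha256(raw_id.encode("utf-8")).hexdigest()


def researcher_in_dataset(csv_path: str, researcher_id: str, id_column: str = "user_id") -> bool:
    """Return True if the researcher's (hashed) identifier occurs in the data."""
    target = hash_id(researcher_id)
    with open(csv_path, newline="", encoding="utf-8") as f:
        return any(hash_id(row.get(id_column, "")) == target for row in csv.DictReader(f))


if __name__ == "__main__":
    # Invented example path and identifier.
    path, me = "newsfeed_export.csv", "researcher-123"
    if os.path.exists(path) and researcher_in_dataset(path, me):
        print("Warning: your own records are in this dataset; document how that is handled.")
```

Such a check does not resolve the looping effect, but it at least forces the researcher to document their own presence in the data before categorising anyone else.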

More to the point, concomitant with the view of big data as a disruptive force on the established methodology of the human sciences is its apparent espousal of an

atheoretical perspective (C. Anderson, 2008; Williamson, 2014). Such an outcome, which

would be similar to outcomes observed when the internet has disrupted other industries, like

music and journalism (Raven, 2012), would have interesting consequences, to say the least.


The critical concept in such a context is a temporal one – a big data mechanism takes in not simply large volumes of data but high velocities of data (Kitchin, 2013), analysis is post-hoc and without hypotheses, and publication may not even be subject to peer review. Such a process is much faster than standard academic research

protocols, and as such, we may find ourselves querying the utility of the hypothetico-

deductive method. This is not least because, with organisational and political pressures on

researchers to produce insight into current techno-social problems, there may simply not be

the time to engage in leisurely multi-year research projects. As such, we may need to

appreciate not only more applied research methodologies, but also action research (Lewin,

1946), grounded theory (Glaser & Strauss, 1967) and artistic experiences (Benson, 2013).

How else can we study cyberspace? Like Heraclitus, we should recognise that we can never

enter the same internet twice. That cyberspace is in a constant state of flux, being changed by

human ingenuity, does give hope for its improvement, but should also lead us to question the

epistemological status of research findings therein.

Interdisciplinarity & transdisciplinarity

It has long been a contention of interdisciplinary studies that we should not assume that

nature fits into neat disciplinary boundaries, still less so with a fast-evolving psycho-technical

environment. Consequently, any successful future academic endeavour in this context will

require not only interdisciplinary efforts in a practical sense, but also transdisciplinary theoretical perspectives (Suler, 2013). Cyberpsychology exemplifies how this combination can, and indeed must, be achieved, requiring input not only from psychology and computer science, but also from similarly recent enterprises such as network science, data visualisation and digital

humanities. Vishik (cited in “Belfast 2014: 4th World Cyber Security Technology Research

Summit,” 2014, p. 8) notes that “the multi-disciplinary nature of cyber security attacks is important; attacks happen for different reasons, only some of which are technical; other reasons include, for example, socioeconomic issues. We need to understand all of these reasons to develop cyber security strategies.” Investigation of cybersecurity therefore requires a multi-disciplinary approach, one which avoids the conceptualisation of cyberspace as a ‘disciplinary land grab’ in a new research frontier. Rather, each participating discipline should question the relevance of its traditional theoretical constructs and research methodologies, and their suitability to cyber domains.

Normal academic practice is also under review owing to changes in the publication process, as we now have a variety of possibilities for dissemination, from paywalled peer-reviewed journals to open access (Jeffery, 2006). Choices in this regard have significant ethical consequences, as they effectively connote a decision around secrecy (Schöpfel & Prost,

2013). This is a particularly salient point of reflection in all areas of cyber research, given the

much lauded ‘free and open’ hyperbole around the internet, and especially in cyber security

studies where it may not always be wise, for law enforcement or national security reasons for

example, to allow public access to research. More to the point, in this fast changing

environment, we need to realise that findings may become obsolete much faster than in other

areas of science, and choose our publication strategy accordingly – for example, does what

has been written on cyberbullying prior to the advent of the smartphone still hold true?

State of the cyberpsychology paradigm

We recognise that the pace of technological development has far outstripped the pace of human psychological development (McGuckin, cited in O’Keeffe, 2014), however

either may be conceptualised. The critical task for cyberpsychology as a discipline is to build

up a body of established findings of how human beings experience technology. For example,

cyber-specific concepts such as online disinhibition (Suler, 2004), hyperpersonal communication


(Walther, 1996), self disclosure (Joinson, 2001), and multitasking (Ophir, Nass, & Wagner,

2009) are becoming well recognised. While recognising that such findings may not endure

(Walther, 2009), they are with us at present.

As a general rule, we should appreciate the possibility that people act differently in

cyberspace than they do ‘in real life’, and that this is of significance. This is something which mainstream psychology, and society in general, has resisted for some time, maintaining that what

happens online isn’t ‘real’. Such a minimisation was tragically realised in the 2014 Isla Vista

shootings, where a brief, polite, face-to-face interview was forensically valued ahead of a

‘vast digital trail’ of murderous intent (Davis, 2014). We must move away from ‘digital

dualism’ and start thinking more in terms of ‘augmented reality’ (Jurgenson, 2011), and

recognise that “the virtual complicates the physical, and vice versa” (Slane, 2007, p. 97).

Critically, this necessitates the abandonment of the idea of ‘computer mediated

communication’ in favour of a paradigm that sees cyberspace as an environmental construct,

a layer of psychological enhancements to our normal experience. While the concept of the

digital native has been overegged somewhat (Bennett & Maton, 2010; Selwyn, 2009), we still

need to consider our own digital literacy: exactly how fluent are we in the language of

cyberspace? And how does this colour our perceptions of what we research?

Equalisation and evangelism

The problem is that we have had several years of ‘technology evangelism’ (Lucas-Conwell, 2006)

which has not exactly produced a great deal of critical commentary (Ungerleider, 2013). This

should also be viewed alongside the corporate and political varieties of ‘cyber-libertarianism’

(Denegri-Knott, 2006). In fact, for several years research laboured under the illusion that

increasing technology would be a social liberator, levelling the playing field for all genders,

social groups and identities. The equalisation hypothesis, however, has not proven to be well supported: anonymous online environments do not produce less gender stereotyping than face-to-face contexts (Postmes & Spears, 2002), and environments with no obvious gender preferences have been observed to be extremely gender biased (Vasilescu, Capiluppi,

& Serebrenik, 2012). These should also be viewed alongside issues such as cyberbullying

(Tokunaga, 2010), the manifestation of sadism online as trolling (Buckels, Trapnell, &

Paulhus, 2014) and the deliberate construction of false online product reviews (E. Anderson,

2013). In sum, these amount to the use of the online environment to find entertainment in conflict, without

risk or accountability (Hardaker, 2010). Again, the purpose of this paper is not to provide a

literature review of any kind, but to make the point that we need to be mindful of the

possibility that the introduction of technology to any human context may not solve all of our

problems, and may well make them worse. As such, our research methodology needs to be

aware of the very real demographic fault lines which still run through cyberspace and the

potential that our work could exacerbate them further.

Reflections

As such, bearing the preceding vistas in mind, the following questions will serve as our principal reflection tools prior to engaging in research. They are by no means exhaustive and will continue to be edited going forward. We adopt a framework from Robson (2002), viz.:

1. Developing ideas

a. Are we working from standard hypothetico-deductive inquiry or some

other practice – grounded theory or action research?

b. Or do we think we can dispense with theory entirely? On what grounds?

c. Should a researcher of cyber domains have a basic level of cyber-literacy?

How do we conceptualise and measure this?


d. From where do we, as researchers, draw our ethical guidance? And what

are the particular problems associated with ethics in cyberspace?

2. Designing the enquiry

a. Are we critically and fairly dealing with the evangelism around

technology’s message of equalisation and liberty?

b. Are we treating cyberspace as a real and valid environment, or are we

minimising it? Alternatively, should we be minimising it?

c. Will the results of our methodology be generalisable beyond the context of

cyberspace?

d. What is the role for real-world methodologies such as the ‘face to face

interview’ regarding investigation of cyber behaviour? Should virtual

research methodologies be employed exclusively in this context?

3. Tactics: the methods of data collection

a. Are we too close or not close enough to the lived experience of whoever

created the data we are examining?

b. Have we adequately dealt with notions such as informed consent,

participant protection, age authentication and vulnerable populations?

c. Of the behaviour we are observing, how much is structured by the

technology in which it occurs?

d. Can we assume that psychometric scales and measures that have been

validated offline will be valid online?

4. Dealing with the data

a. Is our data human or machine produced? Why are we drawing the line

where we are drawing it, and how can we confirm that distinction?


b. Does anyone own the data we are examining, and how would they feel about

what we are doing with it?

c. How are analytical methods selected?

d. Have we considered consent with regard to personally identifiable

information, adequately dealt with anonymity and how data is stored?

Does such a thing as absolute anonymity online even exist?

e. How are findings presented? How do authors decide on the open access

issue? What could be the unintended consequences of publicising this

research?

It is next to impossible to produce formal methodological guidelines for research in

cyberspace at present, but what is useful is to reflect on traditional, current and developing

research practices. By producing these questions, we hope to begin a meaningful analytic

discourse on cybermethodology and enrich knowledge production as a result. Such a

conversation is essential to the continued production of useful knowledge for the success of

secure societies.


References

American Broadcasting Cos., Inc., et al. v. Aereo, Inc. (2014). Supreme Court of the United

States, 13-461. Retrieved from http://www.supremecourt.gov/opinions/13pdf/13-

461_l537.pdf

Anderson, C. (2008). The End of Theory: The Data Deluge Makes the Scientific Method

Obsolete. Wired. Retrieved June 26, 2014, from

http://archive.wired.com/science/discoveries/magazine/16-07/pb_theory

Anderson, E. (2013). Deceptive Reviews: The Influential Tail. Retrieved June 02, 2014, from

http://web.mit.edu/simester/Public/Papers/Deceptive_Reviews.pdf

Belfast 2014: 4th World Cyber Security Technology Research Summit. (2014). Centre for Secure Information Technologies, Queen’s University Belfast.

Bennett, S., & Maton, K. (2010). Beyond the “digital natives” debate: Towards a more

nuanced understanding of students’ technology experiences. Journal of Computer

Assisted Learning, 26(5), 321–331. doi:10.1111/j.1365-2729.2010.00360.x

Benson, C. (2013). Acts not tracts! Why a complete psychology of art and identity must be

neuro-cultural. In Art and Identity: Essays on the Aesthetic Creation of Mind (pp. 39–

66). Amsterdam: Rodopi.

Buckels, E. E., Trapnell, P. D., & Paulhus, D. L. (2014). Trolls just want to have fun.

Personality and Individual Differences. doi:10.1016/j.paid.2014.01.016


Bullen, K., & Oates, J. (2014, July 1). Facebook’s “experiment” was socially irresponsible.

The Guardian (Letters). Retrieved from

http://www.theguardian.com/technology/2014/jul/01/facebook-socially-irresponsible

Carberry, J. (2014). Media statement on Cornell University’s role in Facebook “emotional

contagion” research. Cornell University Media Relations Office. Retrieved July 03,

2014, from http://mediarelations.cornell.edu/2014/06/30/media-statement-on-cornell-

universitys-role-in-facebook-emotional-contagion-research/

Children’s Online Privacy Protection Act. (1998). United States Congress.

Copeland, L. (2011). Is Facebook making us sad? Stanford University research and Sherry

Turkle’s new book Alone Together suggest that social networking may foster loneliness.

Slate. Retrieved July 03, 2014, from

http://www.slate.com/articles/double_x/doublex/2011/01/the_antisocial_network.html

Crawford, K. (2014). The Test We Can—and Should—Run on Facebook. The Atlantic.

Retrieved July 03, 2014, from

http://www.theatlantic.com/technology/archive/2014/07/the-test-we-canand-shouldrun-

on-facebook/373819/

Dale, J. (2013). Facebook privacy is a joke: How Edward Snowden changed my online

habits. rabble.ca. Retrieved January 06, 2014, from

http://rabble.ca/news/2013/08/facebook-privacy-joke-how-edward-snowden-changed-

my-online-habits

Danziger, K. (1994). Constructing the Subject: Historical Origins of Psychological Research

(p. 254). Cambridge University Press. Retrieved from


http://books.google.ie/books/about/Constructing_the_Subject.html?id=c9XuIXsn-

X8C&pgis=1

Davis, J. (2014). UCSB: A Terrible Lesson in Digital Dualism and Misogyny. Cyborgology.

Retrieved June 26, 2014, from http://thesocietypages.org/cyborgology/2014/05/28/ucsb-

a-terrible-lesson-in-digital-dualism-and-misogyny/

Denegri-Knott, J. (2006). Sinking the Online “Music Pirates:” Foucault, Power and Deviance

on the Web. Journal of Computer-Mediated Communication, 9(4), 00–00.

doi:10.1111/j.1083-6101.2004.tb00293.x

Facebook. (2014). How old do you have to be to sign up for Facebook? Facebook.com.

Retrieved from https://www.facebook.com/help/210644045634222

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for

qualitative research. New York: Aldine de Gruyter.

Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González (2014).

Court of Justice of the European Union, C-131/12. Retrieved from

http://curia.europa.eu/juris/documents.jsf?num=C-131/12

Gupta, A., & Ferguson, J. (1997). Anthropological Locations: Boundaries and Grounds of a

Field Science (p. 275). University of California Press. Retrieved from

http://books.google.com/books?hl=en&lr=&id=C4fUmMDEbUIC&pgis=1

Hacking, I. (1995). The looping effects of human kinds. In D. Sperber, D. Premack, & A. J.

Premack (Eds.), Causal cognition: A multi-disciplinary debate (pp. 351–383). Oxford:

Clarendon.


Hardaker, C. (2010). Trolling in asynchronous computer-mediated communication: From

user discussions to academic definitions. Journal of Politeness Research. Language,

Behaviour, Culture, 6(2), 215–242. doi:10.1515/JPLR.2010.011

Hermosillo, C. (2004). Introducing Humdog: Pandora’s Vox Redux. The Alphaville Herald.

Retrieved June 26, 2014, from

http://alphavilleherald.com/2004/05/introducing_hum.html

Hill, K. (2014). Facebook Added “Research” To User Agreement 4 Months After Emotion

Manipulation Study - Forbes. Forbes. Retrieved July 03, 2014, from

http://www.forbes.com/sites/kashmirhill/2014/06/30/facebook-only-got-permission-to-

do-research-on-users-after-emotion-manipulation-study/

Holloway, D., Green, L., & Livingstone, S. (2013). Zero to eight: Young children and their

internet use. Retrieved from http://eprints.lse.ac.uk/52630/1/Zero_to_eight.pdf

Jeffery, K. G. (2006). Open Access: An Introduction. ERCIM News. Retrieved July 04, 2014,

from http://www.ercim.eu/publication/Ercim_News/enw64/jeffery.html

Joinson, A. N. (2001). Self-disclosure in computer-mediated communication: The role of

self-awareness and visual anonymity. European Journal of Social Psychology, 31(2), 177–192. Retrieved from http://elearn.jku.at/wiki/images/f/fd/Joinson00.pdf

Jordan, A. H., Monin, B., Dweck, C. S., Lovett, B. J., John, O. P., & Gross, J. J. (2011).

Misery has more company than people think: underestimating the prevalence of others’

negative emotions. Personality & Social Psychology Bulletin, 37(1), 120–35.

doi:10.1177/0146167210390822


Jurgenson, N. (2011). Digital Dualism versus Augmented Reality. Retrieved from

http://thesocietypages.org/cyborgology/2011/02/24/digital-dualism-versus-augmented-

reality/

Kant, I. (1784). Idea for a Universal History from a Cosmopolitan Point of View. In L. W.

Beck (Ed.), On History (trans. L.W. Beck, R.E. Anchor and E.L. Fackehelm) (pp. 11–

26). Indianapolis and New York: The Bobbs-Merrill Co. 1957.

Kitchin, R. (2013). Big data and human geography: Opportunities, challenges and risks.

Dialogues in Human Geography, 3(3), 262–267. doi:10.1177/2043820613513388

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-

scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. doi:10.1073/pnas.1320040111

Krishna, R. J. (2014). Sandberg: Facebook Study Was “Poorly Communicated.” Wall Street

Journal. Retrieved July 03, 2014, from

http://blogs.wsj.com/digits/2014/07/02/facebooks-sandberg-apologizes-for-news-feed-

experiment/

Lewin, K. (1946). Action research and minority problems. Journal of Social Issues, 2(4), 34–

46.

Lucas-Conwell, F. (2006). Technology Evangelists: A Leadership Survey. In SDForum

Conference on “Technology Leadership and Evangelism in the Participation Age” (pp.

1–33).

Markham, A., & Buchanan, E. (2012). Ethical Decision-Making and Internet Research (pp.

1–19).


Mayer-Schönberger, V., & Cukier, K. (2013). Big Data: A Revolution that Will Transform

how We Live, Work, and Think (p. 242). Houghton Mifflin Harcourt. Retrieved from

http://books.google.com/books?hl=en&lr=&id=uy4lh-WEhhIC&pgis=1

Mc Mahon, C., & Aiken, M. (2014). Privacy as identity territoriality: Re-conceptualising

behaviour in cyberspace. Dublin, Ireland. Retrieved from

http://ssrn.com/abstract=2390934

Meyer, M. N. (2014). Everything You Need to Know About Facebook’s Controversial

Emotion Experiment. Wired. Retrieved July 03, 2014, from

http://www.wired.com/2014/06/everything-you-need-to-know-about-facebooks-

manipulative-experiment/

NETmundial Multistakeholder Statement. (2014). Retrieved June 26, 2014, from

http://netmundial.br/netmundial-multistakeholder-statement/

O’Brien, D. (2014). Facebook Research, Timeline Manipulation, & EU Data Protection Law.

The DOBlog. Retrieved July 03, 2014, from http://obriend.info/2014/07/01/facebook-

research-timeline-manipulation-eu-data-protection-law/

O’Keeffe, A. (2014). Parents warned over “secret” world of cyber-bullying. Independent.ie.

Retrieved June 26, 2014, from http://www.independent.ie/irish-news/parents-warned-

over-secret-world-of-cyberbullying-30185576.html

Ohm, P. (2010). Broken promises of privacy: Responding to the surprising failure of

anonymization. UCLA Law Review, 57, 1701–1777.


Open Letter on Net Neutrality. (2014). The Association for Computers and the Humanities.

Retrieved June 26, 2014, from http://ach.org/activities/advocacy/open-letter-on-net-

neutrality/

Ophir, E., Nass, C., & Wagner, A. D. (2009). Cognitive control in media multitaskers.

Proceedings of the National Academy of Sciences of the United States of America,

106(37), 15583–7. doi:10.1073/pnas.0903620106

Pariser, E. (2011). The Filter Bubble: What The Internet Is Hiding From You (p. 304).

Penguin UK. Retrieved from http://books.google.com/books?id=-

FWO0puw3nYC&pgis=1

Postmes, T., & Spears, R. (2002). Behavior online: Does anonymous computer

communication reduce gender inequality? Personality & Social Psychology Bulletin,

28(7), 1073–1083.

Raven, P. G. (2012). Breaking the fall. In A. Reynolds, M. Atwood, S. Baxter, H. Rajaniemi,

P. G. Raven, M. J. Harrison, & C. Miéville (Eds.), Arc 1.1: The Future Always Wins (p.

137). Arc. Retrieved from http://books.google.com/books?id=M8EsnEV9-fMC&pgis=1

Resnick, M. L. (2013). Ubiquitous Computing: UX When There Is No UI. Proceedings of the

Human Factors and Ergonomics Society Annual Meeting, 57(1), 1007–1011.

doi:10.1177/1541931213571225

Robson, C. (2002). Real World Research: A Resource for Social Scientists and Practitioner-

Researchers (p. 599). Wiley. Retrieved from

http://books.google.ie/books/about/Real_World_Research.html?id=DkplMcAysFQC&p

gis=1


Rosenhan, D. L. (1973). On being sane in insane places. Science, 179, 250–258.

Schöpfel, J., & Prost, H. (2013). Degrees of secrecy in an open environment. The case of

electronic theses and dissertations. ESSACHESS. Journal for Communication Studies,

6(2), 65–86.

Schroeder, R. (2014). Facebook and the Brave New World of Social Research using Big

Data. The Policy and Internet Blog. Retrieved July 03, 2014, from

http://blogs.oii.ox.ac.uk/policy/facebook-and-the-brave-new-world-of-social-research-

using-big-data/

Selwyn, N. (2009). The digital native – myth and reality. Aslib Proceedings, 61(4), 364–379.

doi:10.1108/00012530910973776

Shein, E. (2013). Ephemeral data. Communications of the ACM, 56(9), 20.

doi:10.1145/2500468.2500474

Slane, A. (2007). Democracy, social space, and the internet. University of Toronto Law

Journal, 57(1), 81–105. doi:10.1353/tlj.2007.0003

Suler, J. (2004). The online disinhibition effect. CyberPsychology & Behavior, 7(3), 321–

326. doi:10.1089/1094931041291295

Suler, J. (2013). Cyberpsychology as Interdisciplinary, Applied, and Experiential. RCSI

CyberPsychology Research Centre. Retrieved June 26, 2014, from

http://cypsy.com/News/Cyberpsychology_as_Interdisciplinary

Tallinn Agenda for Freedom Online. (2014). Retrieved June 26, 2014, from

http://www.freedomonline.ee/foc-recommendations


Tokunaga, R. S. (2010). Following you home from school: A critical review and synthesis of

research on cyberbullying victimization. Computers in Human Behavior, 26(3), 277–

287. doi:10.1016/j.chb.2009.11.014

Tufekci, Z. (2014). Facebook and Engineering the Public. Medium. Retrieved from

https://medium.com/message/engineering-the-public-289c91390225

Turkish high court lifts YouTube ban. (2014). JURIST. Retrieved June 26, 2014, from

http://jurist.org/paperchase/2014/05/turkish-high-court-lifts-youtube-ban.php

Ungerleider, N. (2013). Why Tech Journalism Is Like Food Journalism. Medium. Retrieved

June 26, 2014, from https://medium.com/tech-journalism-thoughts/why-tech-journalism-

is-like-food-journalism-3d657738f4fc

Vasilescu, B., Capiluppi, A., & Serebrenik, A. (2012). Gender, Representation and Online

Participation: A Quantitative Study of StackOverflow. 2012 International Conference

on Social Informatics, 332–338. doi:10.1109/SocialInformatics.2012.81

Vertesi, J. (2014). Facebook Experiment Shows How Researchers Rely on Tech Companies.

TIME. Retrieved July 03, 2014, from http://time.com/2950699/facebook-experiment-

social-science-funding/

Walther, J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and

hyperpersonal interaction. Communication Research, 23(1), 3–43.

Walther, J. B. (2009). Theories, Boundaries, and All of the Above. Journal of Computer-

Mediated Communication, 14(3), 748–752. doi:10.1111/j.1083-6101.2009.01466.x


Williamson, B. (2014). The death of the theorist and the emergence of data and algorithms in

digital social research. The Impact of Social Sciences. Retrieved June 26, 2014, from

http://blogs.lse.ac.uk/impactofsocialsciences/2014/02/10/the-death-of-the-theorist-in-

digital-social-research/

Wolf, G. (2010). The quantified self. In TED@Cannes. Cannes, France. Retrieved from

http://www.ted.com/talks/gary_wolf_the_quantified_self.html