CHECKPOINT - DiVA-Portal
CHECKPOINT
A case study of a verification project during the 2019 Indian election

By: Linus Svensson
Supervisor: Walid Al-Saqaf
Södertörn University | School of Social Sciences
Bachelor’s essay, 15 credits | Spring semester 2019
Journalism and Multimedia
Abstract

This thesis examines Checkpoint, a research project and verification initiative introduced to address misinformation on private messaging applications during the 2019 Indian general election.
Over two months, throughout the seven phases of the election, a team of analysts verified election-related misinformation spread on the closed messaging network WhatsApp. Building on new automated technology, the project introduced a WhatsApp tipline that allowed users of the application to submit content to a team of analysts, who verified user-generated content in an unprecedented way. The thesis presents a detailed ethnographic account of the implementation of the verification project. Ethnographic fieldwork has been combined with a series of semi-structured interviews in which analysts underline the challenges they faced throughout the project.
Among the challenges, this study found that India’s legal framework limited the scope of the project, so that the organisers had to change their approach from an editorial project to a research-based one. Another problem concerned the methodology of verification. Analysts perceived the use of online verification tools as a limiting factor when verifying content, as they experienced a need for more traditional journalistic verification methods. Technology was also a limiting factor. The tipline was quickly flooded with verification requests, the majority of which were unverifiable, and the team had to sort the queries manually. Existing technology, such as image-match checking, could be implemented further to deal more efficiently with large volumes of queries in future projects.
Keywords: verification, collaboration, fact-checking, misinformation, India
This study was made possible by funding from the Swedish International Development
Cooperation Agency, SIDA, through the Minor Field Studies program.
Table of Contents

1 Introduction .................................................. 1
  1.1 Purpose of study ......................................... 2
2 Background .................................................... 4
  2.1 The ‘WhatsApp murders’ ................................... 4
  2.2 Internet penetration and connectivity in India ........... 5
  2.3 Political propaganda and disinformation .................. 6
  2.4 Response to the misinformation epidemic .................. 7
3 Theoretical Framework & Literature overview ................... 9
  3.1 Journalism as a discipline of verification ............... 9
  3.2 The fact-checking movement ............................... 12
    3.2.1 Terminology around fake news ........................ 13
  3.3 The Indian context ....................................... 14
    3.3.1 Motivation for spreading misinformation ............. 14
4 Methodology ................................................... 16
  4.1 Participant observation .................................. 16
    4.1.1 A regular day ....................................... 17
  4.2 Semi-structured interviews ............................... 18
5 Findings and Discussion ....................................... 20
  5.1 Stakeholders ............................................. 20
    5.1.1 Pop-Up Newsroom ..................................... 20
    5.1.2 PROTO ............................................... 21
  5.2 Laying the ground for Checkpoint ......................... 22
  5.3 The Checkpoint team ...................................... 24
  5.4 Launching Checkpoint ..................................... 25
  5.5 The verification procedure ............................... 26
  5.6 Crowdsourcing messages from the WhatsApp tipline ......... 27
  5.7 Sorting user requests .................................... 30
    5.7.1 Deciding what to verify ............................. 30
  5.8 Monitoring social media .................................. 34
  5.9 Methodology of verification .............................. 35
    5.9.1 Use of official sources ............................. 35
    5.9.2 A deviation from methodology ........................ 45
    5.9.3 Setting a verdict ................................... 47
  5.10 Evaluation .............................................. 49
    5.10.1 A gradually improved verification process .......... 51
    5.10.2 Limitations of online verification tools ........... 52
    5.10.3 Lack of clarity in the research process ............ 54
    5.10.4 Role of Facebook – too little too late? ............ 56
6 Conclusion .................................................... 58
References ...................................................... 60

Table of Figures

Figure 1. Screenshot of a tweet received through the tipline (Check) ..... 30
Figure 2. A meme received through the tipline (Check) ..... 31
Figure 3. Screenshot of a manipulated image received via the tipline. The text “NaMo again!” has been added to the boy’s t-shirt (Check) ..... 37
Figure 4. Screenshot of the Check verification task list. Analysts followed the task list and checked each box upon completion of the verification step (Check) ..... 38
Figure 5. A screenshot of a tweet received via the tipline. The tweet could be traced to Narendra Modi’s official Twitter handle and proved to be authentic (Check) ..... 39
Figure 6. A manipulated image depicting candidate Kanhaiya Kumar (Communist Party of India, CPI) standing in front of a distorted map (Check) ..... 42
Figure 7. Screenshot of a Facebook post. In the meme, it is argued that the Gandhi family enriched themselves whilst the ISRO was being underfunded ..... 43
Figure 8. A photo of Abhinandan’s doppelganger (Check) ..... 47
1 Introduction In November 2018, ahead of the 2019 general election, fact-checkers and journalists from
across the industry met in New Delhi to attend a workshop, seeking to define some of the key
challenges that information disorder imposes on the industry and society at large. The
workshop was organised by Pop-Up Newsroom, an organisation founded by media innovators
Dig Deeper Media and Meedan, and hosted by civic media start-up Proto. Participants
reached a consensus that rumours and misinformation spread on encrypted platforms1, such as the messaging network WhatsApp (which was acquired by Facebook in 2014), are among the biggest challenges faced by fact-checkers and journalists alike and a serious threat to Indian
democracy. Participants discussed how a collaborative project could address this challenge
(Bell, F., personal communication, May 23, 2019).
The workshop resulted in the Checkpoint research project, commissioned by Facebook. Proto,
a partner of the International Center for Journalists, ran the operation on the ground from its
office in New Delhi. The organisational framework was designed by Dig Deeper Media.
Checkpoint sought to map the misinformation ecosystem on encrypted platforms and to
identify election-related misinformation patterns. For this purpose, it introduced a WhatsApp
tipline, building on new automated technology, which allowed a team of analysts to gather
and verify user-generated content in an unprecedented way. This was made possible thanks to
technological assistance from Meedan and WhatsApp (Proto, 2019).
Misinformation would be crowdsourced from regular WhatsApp users, who were encouraged to submit “suspicious” claims they encountered on the private messaging app. Beyond just collecting data, Checkpoint analysts were to verify these claims and send verification reports back to users (ibid.).
Over the past few years, Dig Deeper Media and Meedan have organised a series of so-called Pop-Up Newsrooms – temporary, collaborative reporting initiatives, often focused on fact-checking – in countries all over the world (see Electionland, 2016; Martínez-Carrillo & Tamul, 2019; WAN-IFRA, 2019). The Pop-Up Newsroom concept can be summarised under
the slogan ‘innovation through collaboration’. By building joint projects involving actors from the media industry and beyond, they hope to generate insights and find solutions to the key challenges that the media industry faces today (see Pop-Up Newsroom, n.d.).

1 By encrypted platforms I refer to platforms that support end-to-end encryption between communicating peers.
The phenomenon could be seen in the light of a rising global fact-checking movement, one
that “widens boundaries of fact-checking as journalistic practice” (Graves 2018, p. 617) by
transcending national borders and different disciplinary fields such as civil society, academia
and the technology sector.
Although Checkpoint was not a pure fact-checking initiative like previous Pop-Up
Newsrooms, it still dealt with a core aspect of fact-checking: the discipline of verification.
The project was also designed based on workflows, technology and key insights from
previous projects. It thus carried some of the significant traits of the Pop-Up Newsroom
concept, adjusted to the Indian context.
1.1 Purpose of study
This study examines how the Checkpoint project crowdsourced and verified user-generated
content from WhatsApp during the 2019 Indian general election. In a time when user-generated content has become an integral part of journalism, new demands are placed on verification, as exemplified by the BBC’s UGC Hub (see BBC, 2017).
Verification is a central task in fact-checking and journalism but, as we shall see, it is not equivalent to fact-checking. The study examines the methodology of verification as adopted by Checkpoint, and how it was implemented during the verification effort.
The study also seeks to examine a trend of international collaborative media projects led by Pop-Up Newsroom. Checkpoint serves as a case study for understanding how the pop-up concept travels across borders and adjusts to unique circumstances, in this case the context of the Indian election. I thereby strive to answer Graves’ (2018) call for more research on how “institutional ties beyond journalism” affect practice (p. 627).
Furthermore, I hope to shed light on a notable gap in the research on fact-checking and
misinformation in India. Previous studies have examined political fact-checking processes
and misinformation primarily in an American context (see Graves, 2013), but there is a lack
of research focusing on the Indian subcontinent.
For these purposes, the study poses the following research questions:
RQ1: How was the Checkpoint project implemented to tackle mis/disinformation during
the 2019 Indian elections?
RQ2: What obstacles and challenges did the project face during its implementation?
RQ3: How did the team members perceive the successes and failures of the project?
RQ1 seeks to lay the foundation for this thesis by presenting how the frameworks and workflows were implemented in the project and examining how analysts crowdsourced and verified user-generated content from the WhatsApp tipline. RQ2 examines the challenges the project’s stakeholders faced during its implementation. RQ3 seeks to evaluate the project by emphasising the experiences of the involved team members.
The project went on for four months, spanning the whole election, and consisted of two phases. First, the data collection phase sought to collect crowdsourced data from the official WhatsApp tipline. I will also refer to this phase as the verification phase, since the verification effort ran simultaneously. The verification phase is the focus of this study, which builds on some 300 hours of ethnographic fieldwork in the workplace combined with semi-structured interviews with the team members of the Checkpoint project.
The post-election data analysis phase saw analysts from the team conducting a content analysis of the amassed data. This subsequent phase is out of scope for this study, as I was not present during that time. The findings of the Checkpoint team will be published by the International Center for Journalists in a separate report independent of this thesis.
2 Background
This chapter illustrates the impact of information disorder in India. It examines the technological context, in which recent years’ developments have created the conditions for a thriving misinformation ecosystem, where WhatsApp has become an important communication tool and carrier of misinformation. It also examines how political parties have contributed to that ecosystem. Lastly, I present an overview of some of the measures that different stakeholders have taken to contain the spread of misinformation. Facebook has taken an increasingly proactive stance in its fight against misinformation, and the Checkpoint project is only one of several responses it has initiated.
2.1 The ‘WhatsApp murders’
On July 13, 2018, Mohammad Salman and his friend Mohammad Azam were attacked by a lynch mob in a small village in Karnataka. The mob claimed that the two were part of a child abduction ring. Mr Salman barely escaped and survived the beatings, albeit with severe injuries. He last saw his friend, Mr Azam, being dragged away by the mob with a noose around his neck. Mr Azam later died from his injuries, according to media reports (Satish, 2018).
The mob attacked the two men after rumours, sparked by a viral video, had circulated in local
WhatsApp groups. In the video, two men on a motorcycle can be seen abducting a child on a
street. The video warned Indians of a child abduction ring operating in the country, with the
intent to kidnap children and harvest their organs. However, the video proved to be fake. Not only was the video shot in neighbouring Pakistan – the sequence had in fact been cut from a Pakistani kidnap-awareness video (Elliott, 2018).
Still, the video and the rumours it sparked gained traction all over the country, resulting in a series of attacks on innocent victims. The incidents linked to the child abduction rumours form part of the notorious ‘WhatsApp murders’, as dubbed by some media outlets, in which at least 33 people were reportedly killed by lynch mobs as a result of misinformation spread on the platform between January 2017 and July 2018 (India Spend, 2018; Chaudhuri & Jha, 2019; Safi, 2018).
2.2 Internet penetration and connectivity in India
India, with its 390 million internet users, has the second-largest online population after China. Although internet penetration in the country is low – some 30 per cent of the Indian population is connected to the internet – it is increasing rapidly. In 2015, the number of connected users grew by 40 per cent, to 277 million people, exceeding the previous year’s growth rate of 33 per cent (Kaur & Nair, 2018).
The development is largely due to a trend in decreasing rates of mobile data and greater
availability of affordable smart phones. The entry of Indian telecom firm Reliance Jio into the
Internet service provider market resulted in decreased prices and affordable data plans (Kaur
& Nair, 2018, p. 2). In 2019, India offers mobile data at the cheapest rate in the world (Cable, 2019). With some 430 million smartphone users, India is the second-largest market for smartphones, after China (Livemint, 2019).
From August 2013 to February 2017, the number of Indian users on the messaging platform WhatsApp rose from 30 million to 200 million (Statista 2019), making India the platform’s biggest global market (Iyengar 2019). An annual report published by the Reuters Institute for the Study of Journalism suggests that a majority of Indians consume news on their smartphones, as claimed by 68 % of respondents. The report revealed that WhatsApp is the biggest platform in India, used by 82 % of respondents, while 52 % said they got news from the messaging application (Aneez, et al., 2019).
WhatsApp, like other social media networks, has made it easier for people to share news and information with each other. It also facilitates consuming and creating multimedia content, which is particularly effective in a country like India, where the literacy level is relatively low. The wide use of groups within the app, paired with the forward function, which allows users to spread information at the click of a button, makes WhatsApp a “potent medium for reaching out to masses” (Farooq, p. 107).
The debate remains unsettled among scholars as to whether or not the technological
development and the surge of social media have enhanced political participation. Some
scholars argue that the technological development enhanced online mobilization around
political issues, while others argue that it only “reinforced existing patterns” so that “educated
voters continued to participate online, while poorer and less educated citizens were unable to
participate effectively due to limited knowledge and technological access” (Chadha & Guha
2016, p. 4390). Yet, the rise of Internet connectivity has prompted the political parties to
change their approach in communicating with the electorate (see Chadha & Guha).
2.3 Political propaganda and disinformation
The social media wings of the political parties, more commonly referred to as ‘it-cells’, have
embraced social media as a tool for political campaigning. The governing Bharatiya Janata
Party, or BJP, was an early adopter in this regard (see Chadha & Guha, 2015). The party’s use of social media to spread its political message is often cited as a key factor behind its success in the 2014 Lok Sabha elections2, in which 66.4 % of registered voters turned out and the BJP became the first party to win an absolute majority in parliament since 1984 (Chadha & Guha, 2015). Using a grass-roots approach,
where voters and volunteers were reached via social media channels, the party saw an
“unprecedented involvement of ordinary citizens”, who took to social networks to “engage
potential supporters by sharing campaign-related materials such as videos and memes and
encouraging them to mobilize others to volunteer and donate as well.” (Chadha & Guha
2015).
This led to the creation of “hundreds of small cells” all over India. According to Chopra (2014), their objective was to “pick the news, put up pictures and articles that criticize the ruling Congress party and praise Modi or the BJP. They are the online crusaders who actively counter anti-Modi coverage” (p. 56).
In their interviews with party volunteers, Chadha & Guha (2016) found that ready-made
campaign material was distributed from the top to the grass roots level. The material consisted
of “a variety of images, posters, charts, and infographics that highlighted successes in BJP-
ruled states” (p. 4399). Many of the memes and hashtags that were shared by volunteers were
also mandated from the top level, such as the trending hashtag #AbkibaarModiSarkaar (“this time a Modi government”). The interviewees expressed that they were instructed to actively avoid “polarizing issues such as religion” (p. 4400).

2 The Indian general elections.
However, media reports suggest that disinformation often originates from the it-cells.
According to Bloomberg, 300 workers were hired by the BJP it-cell to “inflame sectarian
differences, malign the Muslim minority, and portray Modi as saviour of the Hindus”
(Bloomberg 2018). Another report, published by Newslaundry, claims that BJP it-cell workers in Uttar Pradesh, India’s most populous state, were mandated to spread propagandistic or factually incorrect messages in WhatsApp groups to woo voters during the 2017 Legislative Assembly election (Bhardwaj 2017).
Due to a lack of transparency, it is difficult to hold party officials liable for disinformation
spread on social media networks and closed messaging applications. As Campbell-Smith &
Bradshaw (2019) put it, “relying on volunteers and paid workers allows the blurring of
boundaries between campaigning, trolling and propaganda” (p. 5). This makes it hard to
distinguish between disinformation spread by unpaid volunteers, acting on their own mandate, and that spread by workers hired by party it-cells.
At times, misinformation on social networks has seeped through verification filters at mainstream media outlets. The terrorist attack by the Pakistan-based terrorist organisation Jaish-e-Muhammad in Kashmir, in which 40 Indian soldiers were killed, triggered a wave of online
disinformation. Mainstream channels in India and Pakistan published news stories that
amplified rumours and misinformation about the attack (Campbell-Smith & Bradshaw 2019,
p. 1). In 2017, fact-checker Alt-News identified a number of “fake news stories” that were
published by reputable news outlets such as Zee News, India Today and The Hindu (Jawed
2018).
2.4 Response to the misinformation epidemic
In December 2016, Facebook rolled out its fact-checking program. Independent fact-checking partners, verified through the International Fact-Checking Network (IFCN), fact-check and rate posts on the platform3 submitted by users. After fact-checkers have rated a post as false, Facebook places it lower in the news feed, reducing future views by over 80 % on average. Pages that frequently distribute content rated as false by partners get their distribution reduced on the platform (Lyons, 2018).
In February 2019, ahead of the Indian general elections, Facebook announced that it was
expanding the fact-checking program in the country, adding five more partners to the
network. Fact-checkers such as India Today Group, Factly and Fact Crescendo joined the list
of partners (PTI, 2019), increasing the number to a total of eight organisations (Facebook, n.d.).
On April 1, 2019, Facebook took down 687 pages and accounts for engaging in “coordinated inauthentic behavior” on the platform. The pages and accounts were linked to individuals associated with an Indian National Congress4, INC, it-cell. From August 2014 until March 2019, the accounts had spent a total of 39,000 US dollars on Facebook ads (Gleicher 2019).
Another 15 pages, linked to the Indian IT firm Silver Touch, were also taken down. Silver Touch has been associated with the BJP, for which it developed the NaMo app, an app featuring pro-BJP news (Patel & Chaudhuri 2019). The pages had spent a total of 70,000 US dollars on ads from June 2014 to February 2019.
WhatsApp has been pressured by the Indian government to counter the spread of
misinformation on its platform. In July 2018, the IT Ministry issued a statement containing a
stern warning: “If they [WhatsApp] remain mute spectators they are liable to be treated as
abettors and thereafter face consequent legal action” (PIB, 2018).
WhatsApp has since introduced new features on its platform, such as limiting the forwarding function to five chats per forwarded message and labelling such messages with a “Forwarded” tag (WhatsApp, 2018a; WhatsApp, 2018b). In August 2019, it presented the “Frequently Forwarded” label to alert its Indian users to messages that have been forwarded five or more times (Carlsen, 2019).
3 In August 2019, Facebook expanded its fact-checking program in the US to cover Instagram for its American audience (Tardáguila, 2019).
4 Indian National Congress is the political party that has governed the Indian republic for most of its history.
The Indian government itself has taken measures to curb misinformation spread on social
media with Internet shutdowns in affected areas. According to a report by Freedom House
(2018), the country “leads the world in the number of internet shutdowns, with over 100
reported incidents in 2018 alone.” The report concludes that this strategy is a “blunt instrument”, as it interrupts not only the spread of disinformation but also the use of regular online services (Shahbaz 2018). Anecdotal evidence also suggests that the spread of misinformation continues in spite of internet shutdowns (Funke, et al., 2019).
Legal measures have also been taken. The controversial Section 66A of the Information
Technology Act criminalised distribution of “offensive content” online, but was deemed
unconstitutional by the Supreme Court in 2015. Several people have nevertheless been arrested and charged under Section 66A since then (Johari 2019). On May 9, 2019, BJP worker Priyanka Sharma was arrested after she shared a political meme on Facebook targeting West Bengal chief minister Mamata Banerjee. The charge was later dropped, and the Supreme Court ordered the immediate release of Sharma, on the condition that she made a public apology (Anand Choudhary 2019). The event sparked a debate about how legislation encroaches on freedom
of speech.
3 Theoretical Framework & Literature overview
3.1 Journalism as a discipline of verification
The “correspondence” theory of truth views truth as something that “corresponds to the facts
of reality”. Facts, indisputable in their nature, exist outside of systems of value and are not
subject to interpretation (David in: Graves, 2017, p. 520). In the nineteenth century,
journalists saw themselves as purveyors of truth. They unearthed these facts and presented
them to their audiences – news reflected reality. Schudson calls this “naïve empiricism”
(Schudson, 2001 in: Graves, 2017). Kovach & Rosenstiel note a similar school of thought
among journalists, the concept of realism. Realism is the perception that truth is graspable in
the form of facts – facts that speak for themselves, and by simply collecting and presenting
them, journalists could purvey the truth to their audience. In the first half of the twentieth
century, journalists began to worry about the naivete of realism, as they developed a “greater
recognition of human subjectivity” (p. 102). Journalists could never be free of biases and
prejudices. The influential American journalist Walter Lippmann called for a new method, in
line with “the scientific spirit”, which did not ignore human subjectivity, but used certain
mechanisms to minimize this subjectivity and in that way get at the truth. This laid the ground for the modern objectivity ideal. “The call for objectivity was an appeal for journalists
to develop a consistent method of testing information–a transparent approach to evidence–
precisely so that personal and cultural biases would not undermine the accuracy of their
work.” (p. 101).
In a democratic system, the core of journalism is to give citizens the information they need to
make informed decisions. Journalism’s first obligation is therefore to the truth, as Kovach &
Rosenstiel (2014) write in The Elements of Journalism. The truth-seeking in journalism is
what differentiates it from propaganda, entertainment, fiction or art. Kovach & Rosenstiel
define this primary function of journalism as a ‘Journalism of Verification’. However, with the rise of the 24-7 news cycle – fuelled by the twenty-first century’s rapid digitalisation, the growth of the Internet and the fragmentation of audiences – factors such as speed and competition have been given precedence over verification.
The development has pushed journalism in other directions. Kovach & Rosenstiel distinguish several veins of journalism that have changed the logic of media production. The authors note a shift from a journalism of verification to a ‘Journalism of Affirmation’. As the digitalised media landscape, revolutionised by the Internet, fragmented audiences, a new type of journalism arose in which audiences were reached through reassurance and “the affirming of preconceptions” (Kovach & Rosenstiel 2014, p. 64). The ‘Journalism of Aggregation’ refers to new platforms that aggregate content from media outlets without verifying the content themselves and, through recommendations or algorithms, make the news readily available to others.
The emergence of these new strains of journalism places higher demands on the audience, as “The burden of verification has been passed incrementally from the news deliverer to the consumer” (Kovach & Rosenstiel 2014, p. 65).
Despite these changes, the media commonly claim objectivity by emphasizing their impartiality. This is usually done through the narrative of a “neutral voice”. A story is balanced by including different points of view, and can thus achieve an appearance of fairness due to the sole fact that two sides are presented equally. There are always many sides to a story, but fairness and balance should never be invoked for their own sake or as the goal of journalism, the authors argue (p. 109).

For instance, if there is a consensus among scientists that the effects of global warming are real, it would be a disservice to truthfulness and to the audience if journalists gave equal space to both sides of the debate in the name of impartiality.

Balance is not always a means of getting at the truth, but can be used by the media to claim impartiality; “a veneer atop something hollow”.
Years before, Tuchman (1972) noted the same phenomenon. She saw objectivity among
‘newspapermen’ as a strategic ritual to defend their work from public criticism. The practice
of objectivity, as claimed by journalists, consists of different procedures. Through the
presentation of conflicting possibilities (what Kovach & Rosenstiel call “balancing a story”),
multiple statements by differing sides in a conflict are presented. These statements are treated
as equally valid truth-claims, although the facts might not have been verified, or perhaps
are not verifiable. The ‘newspaperman’ claims objectivity by presenting both sides of the
conflict, leaving it to the reader to evaluate the competing truth-claims.
Another such procedure is the judicious use of quotation marks, whereby the journalist
removes his or her presence from the story by citing interviewees or statements from others,
telling the story through quotes rather than through the voice of the reporter. In fact, the
reporting might still be subject to selection bias, as the journalist can mask his or her own
opinion behind quotations aligned with his or her sympathies.
Such procedures can at most be said to be tools used in pursuit of objectivity; they cannot
amount to a truly objective practice, according to Tuchman.
Tuchman further elaborates on the objectivity ideal in Making News (1978). Journalism can
never truly reflect reality, since journalism cannot be truly objective.
News is a window on the world. Through its frame, Americans learn of
themselves and others, of their own institutions, their leaders, and life
styles, and those of other nations and their peoples […]
But, like any frame that delineates a world, the news frame may be
considered problematic. The view from a window depends upon whether
the window is large or small, has many panes or few, whether the glass is
opaque or clear, whether the window faces a street or a backyard. The
unfolding scene also depends upon where one stands, far or near, craning
one’s neck to the side, or gazing straight ahead, eyes parallel to the wall in
which the window is encased (Tuchman, 1978, p. 1).
3.2 The fact-checking movement
The fact-checking movement emerged as a “reformer’s critique of conventional journalism”
(Graves, 2013, p. 127), seeking to “revitalize the ‘truth-seeking’ tradition in the field” (Graves,
2017). Graves (2013), much like Tuchman, saw the problem of journalism using objectivity
as a blanket cover. Graves noted that journalists are more concerned with including multiple
statements from differing parties than with actually verifying those statements. He refers to this as
“he said, she said” reporting.
Fact-checking as a practice first emerged in the U.S. during the early 1990s, with
newspapers fact-checking deceptive advertisements in presidential races (pp. 130–131). But it
was not until the beginning of the new millennium that dedicated fact-checking entities
emerged. In 2003, FactCheck.org was launched, followed by PolitiFact and the Washington
Post’s Fact Checker column in 2007.
Fact-checking should be seen as “a practical truth-seeking endeavor” (Graves, 2017, p. 523).
It is defined by Graves as the practice of “assessing the truth of public claims” made by public
figures, e.g. politicians or pundits (Graves, 2013). Graves wrote his dissertation in 2013, a
time before alternative facts and fake news entered the common vocabulary5. Arguably,
Graves’ definition of fact-checking has become less applicable today as it does not reflect the
challenges that fact-checkers are facing, when misinformation and disinformation spread on
5 The two terms are problematic. Fake news implies that news can be true or fake, when news by definition has to be factual. If it is not, it is not news but rather dis/misinformation or propaganda. Likewise, the term alternative facts implies that facts are disputable, when by definition the word fact is used to assert indisputability.
social networks. Neither does it fully reflect the reality of practice in today’s fact-checking
movement. For instance, Facebook’s fact-checking program exclusively targets
disinformation and misinformation spread on its social platforms (see Facebook, n.d.). As the
misinformation ecosystem evolves, and new efforts are introduced to address it, more
research is needed.
3.2.1 Terminology around fake news
“Fake News” was named word of the year for 2017 by the American Dialect Society. Ben
Zimmer, chair of the American Dialect Society’s New Words Committee, explained the
choice as follows:
When President Trump latched on to fake news early in 2017, he often used
it as a rhetorical bludgeon to disparage any news report that he happened to
disagree with. That obscured the earlier use of fake news for
misinformation or disinformation spread online, as was seen on social
media during the 2016 presidential campaign (American Dialect Society,
2018).
Fake news is historically not a new phenomenon, but the term became popularised during the
2016 American presidential campaign. It arose to describe fabricated news articles spread by
illegitimate news sites, disguised as reputable news outlets, with the intent to mislead (see
Allcott & Gentzkow, 2017). However, fake news also comes in other formats. In a country
like India, disinformation is commonly spread in the form of memes and messages on the
private messaging platform WhatsApp (BBC, 2018).
Fake news is arguably a rather blunt and obscure term for the reality of disinformation
today. This, paired with the fact that its use has been turned into a “rhetorical bludgeon”,
calls for its replacement by more specific terms.
A more useful approach is to define false information according to the intent with which it is
spread. Throughout this thesis, I will use the terms disinformation and misinformation. The
terms have been defined by Dr. Claire Wardle, a research fellow specialised in information
disorder, as follows.
Disinformation is false information that is deliberately created or
disseminated with the express purpose to cause harm. Producers of
disinformation typically have political, financial, psychological, or social
motivations.
[…]
Misinformation is information that is false, but not intended to cause harm.
For example, individuals who don’t know a piece of information is false
may spread it on social media in an attempt to be helpful. (Wardle, 2018).
3.3 The Indian context
Despite the emergent situation of misinformation in India, and a growing number of fact-
checking initiatives, there is a gap in research examining this context. The Indian context
imposes new challenges, unbeknownst to the American tradition of fact-checking, such as
dealing with content in a wide array of languages. Other notable differences in the
misinformation landscape are the relative absence of textual misinformation, and the
prevalence of visual information in the form of memes (BBC 2018, p. 15). The spread of
misinformation on the end-to-end encrypted messaging service WhatsApp also poses different
challenges, and requires a different approach.
3.3.1 Motivation for spreading misinformation
In a report conducted by the BBC, researchers analysed a sample of ‘fake news messages’
spread on WhatsApp and interviewed Indian citizens to find out their reasons for
sharing information (and potentially misinformation) on social media networks.
The report found that among the reasons behind sharing behaviour, “sharing as a civic duty”
was one of the most important: respondents wanted to spread messages that they deemed to be in
the public interest (BBC, 2018, p. 44). The findings align with the results of a survey conducted
by the Indian fact-checker Factly, in which 48.5 % of the respondents gave their main reason for
sharing information as “It might benefit others” (Pratima & Dubbudu, 2019, p. 44).
The massive amount of information that Indians are encountering seems to have blurred the
lines between what is traditionally seen as news – information disseminated by newspapers,
TV, and radio stations – and other competing sources. The researchers call the phenomenon
‘the digital deluge’ – when different types of information are available in the same space.
Traditional news is mixed with news about familiar and personal matters, in the Facebook
‘news feed’ as well as in WhatsApp, where users are often part of several groups dedicated to
family members, colleagues and politics (BBC 2018, p. 23). Since “every type of ‘news’ is in
the same space, ‘fake news’ too can be hosted there” (p. 40).
They conclude that WhatsApp works in part as an echo chamber, where “usage is about
validation of one’s beliefs and identities through the sharing of news and information” in
groups closely associated with one’s political, cultural and social beliefs (p. 36).
A sample of ‘fake news messages’ spread on WhatsApp suggested that a majority of the
misinformation was not directly political. The researchers found that 36.5 % of the fake news
messages consisted of content that could be categorised as “Scares and scams”, while only
22.4 % could be categorised as “Domestic news and politics”. 29.9 % of the messages were
categorised as “National myths” (BBC 2018, p. 43).
The sample of fake news that the researchers looked at suggested that misinformation among
the Right was united by pro-Hindu sentiment. The researchers found that it
usually revolved around Hindutva ideology, or Hindu nationalism, anti-minority sentiments
directed toward Muslims, and support for Prime Minister Narendra Modi (pp. 64–72). Among
the Left, the fake news messages were not as strongly tied to a single agenda, but when they
were, they usually disfavoured the ruling Bharatiya Janata Party, BJP, and Narendra Modi (pp. 72–75). In
total, the data sample suggested that a larger share of the fake news messages was found among
the Right. However, as the researchers point out, other statistical measures would have to be
taken to confirm this.
4 Methodology
4.1 Participant observation
This study is based on data that I collected as a participant-observer within the Checkpoint
team, drawing upon two months – some 300 hours – of ethnographic fieldwork. As a
participant-observer, I have gathered information about workflows and methodology
by observing the everyday work, unexpected events and informal conversations
between team members. These observations have been noted on a daily basis.
Participant observation gives the researcher a unique opportunity to study editorial processes
and decisions made in the workplace. Rather than only analysing the output, the researcher
gains “behind the scenes” access to the processes and intra-organisational forces
behind the resulting output, thus making the “invisible visible”. Furthermore, it offers the
researcher the possibility to observe material that never made it into production, or was later
discarded, as well as the discussions that led up to that decision (Cottle 2009, p. 10).
Nonetheless, participant observation, like every other method, has its downsides. By focusing
too much on newsroom practices the researcher might miss extra-organisational forces
such as economic, technological or political pressures and how they affect the work environment
(Cottle 2009, p. 13). It is up to the researcher to make a conscious effort to correlate
professional practices and organisational tendencies with such extra-organisational forces.
A problematic situation can arise if the participant-observer becomes too involved in the
work, leaving the observation behind and becoming a fully engaged participant. As the
researcher participates in the work, he runs the risk of influencing the workflow and changing
the professional practices in the newsroom, thus compromising the reliability of the study. It is
important that the researcher is conscious of how his presence and activities influence the
workplace, as well as how they can compromise his role as an observer.
However, shifting to a more participatory stance can, when balanced, be beneficial.
In order to understand the workflow and the professional practices, it is often necessary for the
researcher to dedicate some time to gaining hands-on experience by doing the same tasks as
everyone else. Personal relations with other participants can develop, much to the advantage
of the researcher, who can come to be seen as one of the team, whereas from a strictly
observing approach he can be seen as an outsider.
I entered the Checkpoint team as a participant-observer on the condition that I would help
with some tasks where help was needed. I agreed to this arrangement, provided that the
Checkpoint leadership would not interfere in my work as a researcher. I did not see this
arrangement as compromising my role as a participant-observer, as participating in some of
the tasks, I found, was absolutely essential. I needed to spend time working on daily tasks in
order to get an understanding of the methodology, the tools and the software used.
Participating in the everyday work did not mean that I left my role as a researcher behind, as I
continuously took notes about my involvement in all tasks.
The leadership proved to be very understanding of the fact that my primary task at the project
was to do independent research, and thus I could balance my time between helping with tasks
and conducting interviews or observing as I saw fit.
4.1.1 A regular day
Every day started with a morning meeting, at which I would take notes summarising what was
said and by whom. As the day went by I would walk from desk to desk and ask the team
members questions about their tasks. These were informal conversations, in which I enquired
about the piece of content they were working with at that particular moment. Sometimes I
chose to stay with an analyst as they proceeded with verification. This was done in a
subjective manner: whenever I deemed something to be of interest, I stayed with that person
to observe the verification process, what steps were taken to reach a verdict, what decisions
were made and what challenges the analyst faced.
Later in the day I would follow up with the team member to see how their work had
progressed. Every time an analyst had completed a piece, one of the team
leaders – in practice editors – would evaluate the analyst’s work before a verification report card
was sent out to the original user. The analyst and the editor would have a short conversation,
and if the editor thought that the verification report needed changes or additional information,
the analyst would revise it according to the instructions from the editor, who had the final say. I
would attend these meetings and take notes of the conversations.
Throughout the two months that Checkpoint was operating the team received thousands of
queries. Because of the massive inflow of user requests, I could not observe each and every
item. I would personally, using my own judgement, decide which items were of interest for
my research and select them accordingly.
By the end of each day, I would review my notes and add personal reflections. These
reflections touched on any matter of interest and were intended for use in the analysis and
discussion in this thesis.
4.2 Semi-structured interviews
To complement the ethnographic fieldwork, I have conducted a series of semi-structured
interviews, fifteen in total. Nearly all team members have been interviewed, including
analysts, team leaders and the founders of PROTO. I have also interviewed Fergus Bell, a
consultant from Dig Deeper Media who helped design the framework of the project. Two
team members were not interviewed: one was an intern who joined the project later in the
process, and the other because a language barrier prevented a meaningful dialogue.
The interviews were conducted in English, which was the main language of communication in
the work environment. The interviews lasted 30–45 minutes each. The first eight
interviews were conducted in April, the first month of the project. As the verification phase
came to its end, in late May, another seven interviews were conducted. Some of these were
follow-up interviews with previous interviewees.
Prior to the interviews, the respondents were informed about the purpose of the study and
gave their consent to participate as interviewees, what Brinkman & Kvale (2014) call
informed consent. The analysts were offered confidentiality, whereas those with senior
positions were not. The latter were offered transcripts of the interviews prior to the
publication of this thesis, since their names and the quotes attributed to them would be public.
Although none disputed the collected information, they were given an opportunity to do so.
The qualitative interview seeks to understand the world from the point of view of its
participants and to draw meaning from their experiences (Kvale & Brinkman, 2014). In this
study, interviews were centred around a series of topics ranging from methodology of
verification, evaluation of the project, opinions on misinformation and measures to tackle it,
as well as the roles of the involved stakeholders.
Each interview has been dealt with on a case-by-case basis. Interviews were personalised, and
in each case questions have been added or omitted depending on the seniority level or
specialisation of the interviewee. Since the interviews have been conducted over a period of
two months, adjustments have been made over time to correspond with real-time events in the
workplace; addressing challenging situations faced by the participants or important decisions
that impacted their work.
The semi-structured interviews have been used to triangulate and complement ethnographic
observation, seeking to extract information that has not been directly observable in the
workplace environment. By cross-referencing observations with interviews, the researcher
can also discover discrepancies or continuity between statements made by the interviewees
and their observed practices in the work space (Cottle 2009, p. 11).
The scientific utility of qualitative interviews, or lack thereof, has received a fair share of
critique in the social sciences. A common objection to the method is that the qualitative
interview is not scientific since it reflects a common sense worldview expressed by the
interviewee. It is argued that the interview is subjective rather than objective and builds its
result upon the biases of the interviewee. The nature of the interview is personal, since it
builds upon relations between the interviewer and the interviewee and requires some degree
of flexibility, which in turn compromises the rigorousness of the methodological framework.
Studies based on qualitative interviews often draw upon a small number of interviews,
rendering results with low generalisability (Brinkman & Kvale, 2014, pp. 210–213).
However, the authors point out that there is no authoritative definition of science according to
which the interview can be categorised as scientific or unscientific. Many of the weaknesses
in qualitative interviews can rather be seen as strengths in a qualitative study. Interviews give
the researcher unique access to the world of the interviewee. The subjective nature of the
interview can draw insights from the interviewees in a specific context. Their biases represent
differences in personal perspective that let the researcher enhance qualitative understanding of
a certain phenomenon (ibid.).
5 Findings and Discussion
5.1 Stakeholders
Checkpoint was conducted by Proto, an Indian media skilling start-up. The framework of the
project was designed by Pop-Up Newsroom, a joint project between Dig Deeper Media and
Meedan. Meedan provided technological assistance to set up the tipline and Dig Deeper
Media offered consultancy for the local team. Facebook provided funding for the project
through its affiliate WhatsApp.
5.1.1 Pop-Up Newsroom
Pop-Up Newsroom was founded in 2017 by Fergus Bell of Dig Deeper Media and Tom
Trewinnard of Meedan. It strives to nurture newsroom innovation by initiating collaborative
reporting efforts in different countries and contexts, connecting journalists and fact-checkers
within the media industry and putting them in the same room as technologists and academics.
A series of such Pop-Up Newsrooms have been conducted in the past – mainly, but not
exclusively, focusing on curbing misinformation. These projects include Electionland – a
virtual newsroom that covered polling-related issues on election day during the 2016 American
presidential election (see Electionland, 2016) – and Verificado, a collaborative fact-checking
initiative spanning two months during the 2018 Mexican election (see Martínez-Carrillo &
Tamul, 2019; WAN-IFRA, 2019). The former involved some 1,100 journalists across the
United States and the latter some 100 journalists from 60 media partners (ibid.).
At times Bell and Trewinnard have organised Pop-Up Newsrooms involving students for
similar projects. In September 2018, students from three Swedish journalism schools set up a
newsroom – Riksdagsvalet 2018 – seeking to verify misinformation spread on social networks
ahead of the election (see Mattsson & Parthasarathi, 2018). I personally took part in this
project as an undergraduate student journalist.
From January until August 2019, Bell and Trewinnard were involved in various projects
centred around curbing misinformation – from Tsek.ph in the Philippines and CekFakta in
Indonesia to the target of this case study, the research project Checkpoint.
Each new project builds on experience and key insights from the last, while allowing for
adaptation to each unique context.
“The reason we need something like Pop-Up Newsroom is that it allows us to iterate and
build on the previous version rather than everyone starting from scratch. And that allows us to
innovate faster and to move the journalism industry forward” (Fergus Bell, personal
communication, May 23, 2019).
More projects have been planned for the remainder of 2019, such as Reverso, a fact-checking
initiative in Argentina, and Election Exchange, which will be rolled out during the 2020 US
election campaign (see Reverso, n.d.; Marrelli, 2019).
5.1.2 PROTO
Proto is a civic media start-up that was founded in 2018 by ICFJ Knight fellows Nasr ul Hadi
and Ritvvij Parrikh. The Knight Fellowships, a program run by the International Center for
Journalists (ICFJ), are “designed to instill a culture of news innovation and experimentation
worldwide” and, through collaboration with the news industry, “seed new ideas and services
that deepen coverage, expand news delivery and engage citizens, with the ultimate goal to
improve people’s lives” (ICFJ). The team in India focuses on reinventing news production
and strengthening reporting in areas such as “health, gender and development issues” (ICFJ).
The primary approach for Proto is community based co-learning. Just like Pop-Up
Newsroom, the concept behind Proto is driven by the idea of innovation by collaboration.
Nasr ul Hadi and Ritvvij Parrikh believe that by bringing people from the news industry
together, they can build meaningful relationships and learn from each other. To achieve this
purpose, they organise weekly meet-ups and bootcamps at their office in New Delhi, centred
on pressing issues faced by the industry.
“We are not going to be able to go back to grad school and take a career pick and go and learn
stuff that is new and cutting edge. The way to learn is going to come to these peer-to-peer
learning environments and showcase each other’s work and learning from hands-on sessions,”
ul Hadi said (ul Hadi, N., personal communication, May 30, 2019).
Proto directs its work at what ul Hadi calls three “crises” in media: credibility, adaptability
and sustainability. The credibility crisis is defined as the media’s struggle to stay credible in a
landscape of information disorder, adaptability is the struggle to keep up with the
technological challenges imposed on the industry and sustainability is about finding
sustainable business models as media organisations see their ad-revenue decrease (ibid.).
Checkpoint was a data-driven project, corresponding to the credibility crisis.
5.2 Laying the ground for Checkpoint
In November 2018 Pop-Up Newsroom hosted a workshop together with PROTO at the
latter’s premises in New Delhi. The workshop was attended by representatives across
different disciplines, from fact-checkers and journalists to technologists and academics.
Among the domestic fact-checkers, Factly and Alt-News were present; among journalists,
representatives came from outlets such as the Times of India, The Indian Express, The Quint
and The Deccan Herald (F. Bell, personal communication, 2019, May 23).
The agenda of the workshop was to identify the key challenges that information disorder
imposes on the media, as well as to build a framework for potential solutions that would curb
the spread of misinformation and prevent its impact on the 2019 Indian Lok Sabha elections.
After defining a mission statement – much focused on targeting communal rumours
mainly spread through WhatsApp – the participants sought financial support to set up a joint
fact-checking initiative involving multiple stakeholders, in the spirit of previous pop-up
newsrooms (ibid.).
However, the original vision of such a collaborative fact-checking effort could not be realised.
Under the Foreign Contribution (Regulation) Act of 2010, non-Indian companies are
prohibited from funding domestic media organisations or media projects in the country
(FCRA, 2010). As Facebook – an American company – came to be the sole funder of the
project, there was no way of initiating a media project with an editorial output communicated
through broadcasting or other journalistic means and platforms.
Consequently, the resulting outcome departed considerably from what was first envisioned. After
months of discussions with Facebook, the involved parties had finally redefined how they
could operate a project addressing the problem area as defined during the workshop. The
result was Checkpoint – a research project commissioned by Facebook, executed by Proto
with technological assistance from Pop-Up Newsroom’s founder Meedan, and consultancy
regarding framework design by Dig Deeper Media (PROTO, 2019).
According to Bell, a research project, although a deviation from what was first envisioned,
would still “achieve a lot of the same goals” without an editorial output (F. Bell, personal
communication, 2019, May 23). Instead of actively fighting misinformation on WhatsApp as
a pre-emptive measure, the project would gather unique data to better understand the type of
misinformation that spreads in closed messaging networks during the election, generating
insights for stakeholders in future projects.
The purpose of the research was to “map the misinformation landscape in India, especially
misinformation related to the general election” and to generate “insights on misinformation
that will be useful for journalists addressing civic issues in India” (Shalini Joshi, personal
communication, April 17, 2019).
For a period of two months, spanning all seven phases of the election, Checkpoint would
crowdsource data from its WhatsApp tipline. By encouraging users to share “suspicious”
content encountered on the encrypted platform, the team aimed to build a database of
misinformation and rumours that would “otherwise not be accessible” due to the encrypted
nature of the messaging application (PROTO). By amassing this unique data, the team strived
to map out misinformation patterns on WhatsApp.
Anyone could send a verification request to the Checkpoint team – in the form of a link, text,
photo or video – and the team would assess the request accordingly by verifying the
authenticity of a claim or media file. However, verification was a secondary priority for the
team, which would be dealt with according to the team’s capacity.
5.3 The Checkpoint team
At peak capacity, the team consisted of ten members, among them one
intern. Eight were analysts, dealing with verification and sorting data. They were led by two
team leaders, whose roles were similar to that of an editor.
The team members came from a variety of Indian states across the country, e.g.
Uttarakhand, West Bengal, Bihar, Kerala, Telangana and Delhi. Most of them had a
background in journalism, having worked for local, regional and national newspapers as well as
broadcasters distributing news in English and regional languages. Two team members had a
background in media training and research.
The tipline considered requests in five languages: Bengali, English, Hindi, Malayalam and
Telugu. English is considered an urban language, spoken mostly in the cities, while the others
are regional languages. To deal with multilingual verification requests, staff were hired on the
basis of linguistic ability, so that at least one language specialist was assigned to cover
queries in each respective language. One language specialist dealt with user requests in
Malayalam, another dealt with Telugu, and a third was responsible for Bengali content.
Everyone spoke English, which was also the language used for communication in the
workplace. Most could speak, read and write in Hindi – a northern Indian language – although
with varying ability, since the analysts came from different regions of India where Hindi is
not the main language. Hindi was the second most spoken language in the workplace after
English (Author’s field notes, 15-04-2019).
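The staffing model described above amounts to a simple language-based routing table: each incoming query is matched to the specialist covering its language. The sketch below is illustrative only; the specialist labels are invented, and in practice the Checkpoint team assigned queries manually rather than through software.

```python
# Hypothetical sketch of routing tipline queries to language specialists.
# The specialist labels are placeholders, not the actual team roster.
SPECIALISTS = {
    "Bengali": "analyst_bn",
    "English": "analyst_en",
    "Hindi": "analyst_hi",
    "Malayalam": "analyst_ml",
    "Telugu": "analyst_te",
}

def route(language: str) -> str:
    """Return the specialist assigned to cover the query's language."""
    if language not in SPECIALISTS:
        raise ValueError(f"unsupported tipline language: {language}")
    return SPECIALISTS[language]
```

The point of the table is simply that coverage of all five tipline languages requires at least one specialist per language, which is how hiring was structured.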
Prior to the launch of the project, the team went through basic training in some of the tools
used, such as RevEye reverse image search and InVID for video verification. Since the analysts
were not very experienced with online verification tools, “there was a lot of learning that
people had to do very quickly” (Joshi, S., personal communication, May 28, 2019).
5.4 Launching Checkpoint
On April 2, 2019, Meedan announced the launch of the Checkpoint tipline (Meedan, 2019).
The announcement was amplified by some of the biggest media organisations in India and
abroad (see Ravikumar & Rocha 2019; Bhargava 2019; Ganjoo 2019; Purnell 2019).
Despite the press release stating the objective of the project as research, it was widely framed
as an effort to fight misinformation during the Lok Sabha elections. The media attention soon
shifted to critique, and Checkpoint was caught in the crossfire after several online
newspapers decided to test the effectiveness of the tipline by sending verification
requests – without receiving any response (see Haskins 2019; Mac & Dixit 2019). PROTO
responded by issuing an FAQ on its website, underlining that:
The Checkpoint tipline is primarily used to gather data for research, and is
not a helpline that will be able to provide a response to every user. The
information provided by users helps us understand potential misinformation
in a particular message, and when possible, we will send back a message to
users (PROTO, 2019).
Subsequently, The Economic Times concluded that the tipline was “of no use when it comes
to spot and remove misinformation in the upcoming general elections” (The Economic Times,
2019).
Nasr ul Hadi, founder of PROTO, responded to the critique:
Even though the press announcement clearly said that this was a research
project to understand how misinformation works during the elections within
closed [messaging] networks, people understood it to basically mean that
this is a helpline, if we send something in we will get a response back. That
was beyond the scope and the bandwidth of the project (ul Hadi, N.,
personal communication, May 30, 2019).
The verification process – a time consuming task that occupied nearly the first two months of
the project – was mainly done as a means to gather data. By sending out verification reports
the team hoped to encourage users to “participate in this research as “listening posts” and
send more signals for analysis” (PROTO, 2019). As Shalini Joshi, co-team leader, pointed out:
“People would not send us queries if we would not send out verification reports back. And so
we’ll never know what is trending or what people want us to respond to if we don’t send out
these verification reports” (Joshi, S., personal communication, April 17, 2019).
5.5 The verification procedure
Based on ethnographic observation and interviews with the Checkpoint team, I present an
account of the verification process in the following four steps, which will be covered
in depth in the sections below.
I. Input: Crowdsourcing
The automated process during which verification requests were crowdsourced from users
through WhatsApp and gathered in a database.
II. Sorting
User requests were evaluated by analysts who separated verifiable queries from those
unverifiable or otherwise out of scope. Verifiable queries were flagged and forwarded to one
of two team leaders for review. Flagged verification requests were evaluated by team leaders
and, if deemed verifiable, assigned to analysts for verification.
III. Verification
Analysts proceeded to verify items following a defined task list. Upon verification, items
were graded on the following scale: “True”, “False”, “Misleading”, “Disputed” or
“Inconclusive”, then sent to a team leader for approval.
IV. Output: Verification report
The team leader reviewed the verification steps taken to reach a verdict. If approved, a final
verdict was set and a report card was automatically sent to the user that submitted the initial
verification request via the WhatsApp tipline. If a verdict lacked supporting evidence, the
analyst responsible for the item was asked to look for additional evidence.
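The four-step workflow above can be modelled as a simple state machine. The sketch below is my own illustrative reconstruction, not the actual Check software; all class, field and method names are invented.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    RECEIVED = "received"   # I.   crowdsourced via the tipline
    FLAGGED = "flagged"     # II.  sorted and approved by a team leader
    VERIFIED = "verified"   # III. graded by an analyst
    REPORTED = "reported"   # IV.  report card sent back to the user

# The five-point grading scale used by the analysts.
VERDICTS = {"True", "False", "Misleading", "Disputed", "Inconclusive"}

@dataclass
class Item:
    text: str
    status: Status = Status.RECEIVED
    verdict: Optional[str] = None

    def flag(self) -> None:
        # II. Only freshly received items can be flagged as verifiable.
        assert self.status is Status.RECEIVED
        self.status = Status.FLAGGED

    def verify(self, verdict: str) -> None:
        # III. An analyst grades the item on the five-point scale.
        assert self.status is Status.FLAGGED and verdict in VERDICTS
        self.verdict = verdict
        self.status = Status.VERIFIED

    def approve(self) -> None:
        # IV. A team leader approves; the report card then goes out automatically.
        assert self.status is Status.VERIFIED
        self.status = Status.REPORTED
```

Enforcing the transitions with assertions mirrors the fact that an item could not reach the report stage without first passing sorting and team-leader approval.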
5.6 Crowdsourcing messages from the WhatsApp tipline
Indian fact-checkers commonly crowdsource claims from their audiences. Alt-News and
Boom Live operate helplines on WhatsApp, from which they encourage users to forward
misinformation spread in their private networks. For similar purposes, international fact-
checking collaborations such as Comprova and Verificado used tiplines during the Brazilian
and Mexican elections in 2018 (see Wardle, Pimenta, Conter, Dias, & Burgos, 2019; Owen,
2018).
The leads they get from their tiplines go through journalistic news-valuing processes in
which criteria such as potential for virality are considered. Fact-checkers then decide if resources
should be spent on verifying a claim. These tiplines are used as a complement to other
methods of sourcing input such as monitoring social networks for viral claims. Since fact-
checkers are driven by journalistic principles, they want to create an editorial output exposed
to a large audience. This usually involves sharing a debunked or verified claim in their social
media channels to gain maximum traction (see Wardle, Pimenta, Conter, Dias & Burgos,
2019).
The logic behind Checkpoint was different compared to that of conventional fact-checkers.
Since it was primarily a research project it had no editorial output available to the public, nor
any intention to present its verification reports to a large audience. Its purpose was to examine
and analyse crowdsourced messages from the WhatsApp tipline and the verification process
was limited to those messages.
What made the tipline unique was that it built on new technology which allowed some level
of automation to handle user requests. Previous Pop-Up Newsroom initiatives such as
Verificado demanded that fact-checkers manually responded to the received queries (Joshi, S.,
personal communication, April 17, 2019), whereas during the Checkpoint project verification
reports were automatically sent to WhatsApp users upon verification.
Meedan provided technological assistance for Checkpoint to make the tipline possible.
Together with WhatsApp they built an interface that integrates the WhatsApp Business API
(an application programming interface) with Meedan’s platform Check. The WhatsApp Business API, a
feature used for businesses to communicate with clients, integrates with Check through
Smooch – an omnichannel conversation API (Author’s field notes, April 5, 2019; see also
Facebook, n.d.; Smooch, n.d.).
Any user could add the tipline’s number in their phone book and send a message to the
Checkpoint team – including text, an image or a link to a video (PROTO, 2019). All messages
entered a database on Check where analysts could overview received queries. The tipline was
semi-automated, operated by a chatbot that interpreted received messages and responded to
them according to a template of standardised responses. After sending a message to the
number, the user was asked to confirm whether s/he wished the team to verify it. Upon
confirmation, the message entered Check via the API. The tipline maintained the end-to-end
encryption, and the user was completely anonymized in the process. No personal
metadata, such as the user’s location or phone number, was stored in the process.
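The intake flow described above (confirmation prompt, anonymization, no stored metadata) can be sketched as a single handler. This is a hypothetical illustration: `handle_incoming`, `anonymize` and the canned replies are my own inventions, not the actual Smooch/Check integration.

```python
import hashlib

# Stand-in for one of the chatbot's template of standardised responses.
CONFIRM_PROMPT = "Do you want us to verify this message? Reply YES to confirm."

def anonymize(sender_id: str) -> str:
    """Replace the sender's phone number with an opaque one-way hash,
    so no personal metadata is stored alongside the query."""
    return hashlib.sha256(sender_id.encode()).hexdigest()[:12]

def handle_incoming(sender_id: str, body: str, confirmed: bool, database: list) -> str:
    """Sketch of the semi-automated tipline: a first contact triggers a
    confirmation prompt; a confirmed message enters the Check database."""
    if not confirmed:
        return CONFIRM_PROMPT
    database.append({"user": anonymize(sender_id), "content": body})
    return "Thanks! Your query has entered our verification queue."
```

Hashing the sender identifier one-way is one plausible way to keep the database free of phone numbers while still letting a report card be routed back through the messaging layer.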
Messages appeared as items in the Check database, which the team analysts would overview.
They would manually sort through the collected queries. By using a Google spreadsheet with
a set of general guidelines, the ‘Standard Operating Procedures’, analysts sorted and flagged
verifiable queries and separated them from those that were unverifiable by the methodological
standards. User requests were evaluated against criteria such as relevance to polling and
separated from those that were not suited for verification, such as spam, opinion or satire.
After an item had been marked as out of scope, a message was automatically sent out to
inform the end user that their message would not be verified.
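In principle, parts of this manual sorting could be assisted by a rule-based triage of the kind sketched below. The keyword lists are invented for illustration and are far cruder than the judgment calls the Standard Operating Procedures required; in the actual project this step was done by hand.

```python
# Hypothetical keyword rules per out-of-scope category; the real sorting was manual
# and guided by the Standard Operating Procedures, not keyword matching.
OUT_OF_SCOPE_RULES = {
    "spam": ["win a prize", "click here", "job opening"],
    "satire": ["parody", "satire"],
    "entertainment": ["bollywood", "cricket"],
}

def triage(message: str) -> str:
    """Return an out-of-scope tag, or 'review' to pass the item to an analyst."""
    lowered = message.lower()
    for category, keywords in OUT_OF_SCOPE_RULES.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return "review"
```

An item tagged out of scope could then trigger the automatic message to the end user, while 'review' items would stay in the analysts' queue.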
When the tipline was launched on April 2 the Checkpoint team received an “overwhelming”
number of user requests, as expressed by one team leader, with hundreds of WhatsApp
messages coming in the first couple of hours (Author’s field notes, April 5, 2019). By the end
of the day, that number had increased to some 25,000 items (Check). For a team of eight
analysts, this inevitably led to a time-consuming sorting process, since analysts had to sort
each item manually. The majority of incoming items were not verifiable – a large portion was
considered to be spam or otherwise falling out of scope for verification. This meant that a lot
of resources had to be focused on filtering out thousands of unverifiable items. At least one
analyst was occupied almost full time with this laborious task (Author’s field notes, April 8,
2019).
Hence, one of the drawbacks of the project was the lack of automation, as Nasr ul Hadi,
co-founder of PROTO, put it.
Technology was not ready for a lot of what the project required… and so
that basically meant that a lot of what we would have done [had we had the
time] would have been dealt with by the machine side of it before the
humans got involved. We ended up having to throw people at these
problems, and that was not a very productive use of our time or motivation
or headspace (ul Hadi, N., personal communication, May 30, 2019).
For some of the received queries there were dozens of duplicates, but Check had no built-in
feature that could automatically cluster these duplicates.
Analysts had to go through identical items and manually cluster them to a parent file, copying
the qualities from the parent file to the child file.
Identifying duplicated and related queries coming into the tipline […] We
are miles away of doing that effectively but it’s simply because I don’t think
not enough people have explored it or have been given enough time and
resources to be able to do it. It’s not because it’s not possible. We now
know what kind measures would need to be used to enable that clustering
better […] for instance, a traditional approach might have been to find a
way to look at related keywords, but in a project where most of your queries
are not keywords-based – they’re visuals-based – we have to start with
image match check which Meedan has already figured out as a problem
which they want to be able to solve6. (ul Hadi, N., personal communication,
May 30, 2019).
6 Meedan has since improved this feature. According to media reports, clustering has now been improved so that Check recognizes identical or similar items in the database and automatically sends them to users once they are verified (Tardáguila, 2019).
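The image clustering ul Hadi refers to is commonly built on perceptual hashing: near-identical images produce nearly identical bit strings, so duplicates can be grouped by Hamming distance. The sketch below is an illustrative toy that operates on small grayscale pixel grids rather than real image files, and is not Meedan's implementation.

```python
def average_hash(pixels: list[list[int]]) -> list[int]:
    """Hash a grayscale pixel grid: 1 where a pixel is above the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a: list[int], b: list[int]) -> int:
    """Count the positions where two hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def cluster(images: dict[str, list[list[int]]], threshold: int = 2) -> dict[str, str]:
    """Map each image id to a 'parent' id; near-duplicates share a parent,
    mirroring the parent/child clustering analysts performed by hand."""
    parents: dict[str, list[int]] = {}   # parent id -> its hash
    assignment: dict[str, str] = {}
    for name, px in images.items():
        h = average_hash(px)
        for parent, parent_hash in parents.items():
            if hamming(h, parent_hash) <= threshold:
                assignment[name] = parent
                break
        else:
            parents[name] = h
            assignment[name] = name      # first of its kind becomes the parent
    return assignment
```

A production system would first downscale actual image files to a fixed grid before hashing; the grouping logic would remain the same.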
5.7 Sorting user requests
5.7.1 Deciding what to verify
Whereas misinformation touches all matters, from the trivial and mundane to the political,
Checkpoint targeted election related misinformation surrounding issues of national interest –
misinformation that could impact polling, law and order or be likely to incite violence. Claims
about non-election-related matters, e.g. rumours about Bollywood celebrities, sports or other
entertainment sectors, were considered out of scope together with opinion-related claims,
conspiracy theories and satire.
Opinionated claims are generally avoided by fact-checkers, Graves (2017) notes, because
“value-laden claims cannot be tested for their correspondence to reality” (p. 520). Political
opinions have their roots in different sets of values and, being subjective in nature, do not
lend themselves to verification. Political opinions are based on interpretations of facts. For instance,
the following screenshot of a tweet criticizes prime minister Modi’s performance as a
politician, accusing him of having to resort to drastic measures to gain votes (see Figure 1).
Each and every claim could be verified: did he release a movie? Did he start his own channel?
But the context of these claims is highly opinionated, and their link to the performance of the
prime minister cannot be verified.
Figure 1. Screenshot of a tweet received through the tipline (Check).
When it was announced that the BJP had won the election, the following meme reached the
tipline (see Figure 2). Just like the tweet, it was unverifiable in nature as all claims were
opinionated.
Claims that the “Results Were Not At All Surprising” or that the biggest lesson of the election
was that the “Opposition Should Try To Improve Themselves rather Than Just Hating The
Ruling Party In Next 5 Years” were not factual claims, but claims based on values.
Figure 2. A meme received through the tipline (Check).
As previously mentioned, a majority of the queries were considered to be spam or otherwise
not suited for verification, e.g. obscene or pornographic material, sponsored content, job
advertisements, and even threats (Author’s field notes, April 5, 2019; Standard Operating
Procedures). Any claim in a language other than the five languages covered by the project
was not dealt with (Author’s field notes, April 8, 2019; PROTO, 2019).
Alleged violations of the Model Code of Conduct – a set of rules and guidelines regulating the
conduct of political parties to ensure fair and free elections – served as a key area of interest
to guide the team in the sorting process (Author’s field notes, April 5, 2019). As the Election
Commission of India announced the election schedule, the Model Code of Conduct came into
force. The rules have been agreed to by a consensus among the political parties, and those
rules prohibit conduct such as the use of political symbols in proximity to polling stations or
other means of influencing voters during polling. One historically frequent violation, for
example, is the distribution of alcohol to voters by party workers (Election Commission of
India [ECI], 2019).
On days of polling, the verification effort would be focused on the constituencies where
polling was held. Polling-related issues would be prioritised, e.g. alleged violations of the
Model Code of Conduct or allegations surrounding malfunctioning electronic voting
machines, long lines at voting booths, people being prevented from or otherwise unable to vote,
distribution of alcohol to voters and corruption otherwise impacting polling (Author’s field
notes, April 10, 2019). As polling in India was scheduled over seven phases, spanning over
two months, this meant that the target of the verification effort would shift between different
regions throughout the project.
The aim was to focus on unique queries and not to verify pieces that other Facebook-affiliated
fact-checkers had already covered, so as not to duplicate their work. Current topics were to be
prioritized over obsolete matters (Joshi, S., personal communication, April 17, 2019;
Author’s field notes, April 5, 2019).
The verification process was bound to the task list in the Check software, meaning that
verification would be limited to items that could be verified following the outlined
verification steps. The methodology was restricted to the use of online open-source
verification tools, such as reverse image search. Verification was thus limited to queries that
could be verified using available data in the public domain e.g. official government databases,
and verified social media accounts.
Due to the legal constraints, traditional verification methods used in journalism and fact-
checking were not part of the methodology. The team never consulted experts, called up party
officials or cross-checked a claim with reporters or other sources on the ground. Queries that
demanded these measures were to be dropped. Pieces that required in-depth fact-checking
were not verified, such as news articles, speeches etcetera (Author’s field notes, April 5,
2019).
Each analyst started the day by overviewing the database of items, which was updated in real
time. Whenever a new request came in, it became visible for the analyst in the Check
interface. Three language specialists prioritized queries in their respective languages – Bengali,
Malayalam and Telugu – whereas the rest of the analysts dealt with queries in English and
Hindi.
To identify and select verifiable leads among the items, an analyst could cross check an item
with the Standard Operating Procedures, a document that listed a number of criteria used to
qualify an item for verification. The same document was used to filter out items not suited
for verification: guided by a list of out-of-scope topics, e.g. satire, entertainment or opinion,
analysts flagged such items and tagged them with the corresponding category. These tags would also be
useful for research purposes, as a subsequent content analysis was conducted to map out
misinformation patterns in the amassed data.
After a lead had been selected for verification, analysts sent an item’s link in a Slack7 channel
and tagged team leaders. Team leaders reviewed the item and, if approved, assigned it to an
analyst for verification. The team leaders thus operated as editors and held the final call in
deciding if an item was to be verified or not. The team aspired to send out eight unique
verification reports on a given day, not counting duplicates. They also aspired to get a
balanced output, with several languages targeted. A team leader mentioned that ideally, the
team would put out at least two verification reports per language – ten in total, not including
duplicates (Author’s field notes, April 9, 2019). On polling days in regions where a certain
language was more prevalent than another, the team would have to prioritize claims accordingly
(P. Raina, personal communication, April 10, 2019).
Although there was an outlined methodology for selecting leads, elements of subjectivity
remained in the process and exceptions were sometimes made. Each query had to be dealt
with on a case-by-case basis, as stressed by team leader Pamposh Raina.
“There are claims […] where things are being said about X or Y, and those are personal
opinions and attacks, so we have to take it on a case-to-case basis. Who is making those
7 Slack is a virtual collaboration hub used for communication within teams.
attacks? Against whom, does it even matter, is that national interest, and public interest?”
(Raina, P., personal communication, April 10, 2019).
5.8 Monitoring social media
The public debate shifts from day to day as unforeseen events occur and news stories break.
Analysts would monitor two social media platforms – Facebook and Twitter – to identify
trending topics on a given day, as well as targeting and identifying hyper local issues in
constituencies where polling took place. The idea was to use this as a method to triangulate
with WhatsApp user requests. Queries were to be prioritized according to the relevancy of an
issue on any given day (Author’s field notes, April 8, 2019).
Crowdtangle, a tool to measure performance of Facebook posts, was used to create lists of
influential groups, pages and users from across the spectrum, some of which have been noted to
disseminate misinformation in the past (see BBC, 2018). Tweetdeck was used in a similar
way to monitor content on Twitter. Tweetdeck allowed the team to populate watchlists with
accounts belonging to political parties, third party fact-checkers, media outlets, prominent
influencers, public figures and government bodies. The lists were updated in real-time so that
a user could get a sense of trending tweets on a daily basis. Besides populating watchlists, a
set of search strings were designed to monitor tweets related to areas of interest on polling
days, such as polling related issues or allegations of breaches of the Model Code of Conduct.
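A polling-day search string of this kind can be assembled programmatically. The helper below is hypothetical and the example terms are placeholders, not the team's actual watchlist; it follows common Twitter search conventions (quoted phrases, OR within a group, groups implicitly ANDed).

```python
def build_search_string(issues: list[str], places: list[str]) -> str:
    """Combine polling-day issue terms with constituency names into a
    boolean query of the form ("A" OR "B") ("X" OR "Y"). In Twitter-style
    search syntax, OR joins alternatives and adjacent groups are ANDed."""
    issue_part = " OR ".join(f'"{term}"' for term in issues)
    place_part = " OR ".join(f'"{place}"' for place in places)
    return f"({issue_part}) ({place_part})"
```

On each polling day, the place list would be swapped for the constituencies going to the polls, narrowing the monitoring column to hyper-local claims.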
In previous iterations of Pop-Up Newsroom, monitoring social media has been part of the
news gathering process. Viral claims and rumours were picked up from these channels, then
evaluated and verified. For Checkpoint, however, monitoring proved less useful in practice
once the project had started. We often found that there was no direct connection
between the content we were watching versus the content we were getting from the tipline. At
those rare instances when we did see a correspondence, we could streamline our verification
effort accordingly. More often, it only gave us an indication of the amount of content that did
not reach the tipline.
5.9 Methodology of verification
The framework of the methodology and workflows had been developed with assistance from
Dig Deeper Media, such as designing the verification task list in Check (Bell, F., personal
communication, May 23, 2019). Tweaks and changes would be done to the methodology over
time.
5.9.1 Use of official sources
Shalini Joshi described the verification methodology of Checkpoint as “a very scientific and
independent process […] not depending on newsrooms, individuals or any political parties for
information” (Joshi, S., personal communication, April 17, 2019). Whereas other fact-
checkers commonly use news articles as sources to verify a claim, the Checkpoint
methodology was restricted to the use of official, primary, sources in the public domain:
public records such as press releases, government databases or verified social media accounts.
By default, news articles were not to be used to verify a claim – no matter how reputable a
news organisation. The same applied to the use of verification reports published by fact-
checkers. This was not due to a categorical distrust in media, but rather a means to keep the
verification process independent. Sometimes media reports don’t uphold verification
standards or simply are not transparent with their verification routines, as can be noted in the
rare use of photo credits by some Indian media outlets. To avoid amplifying potential errors
committed by others, it was key for analysts to verify queries independently. One analyst told
me: “We should never trust other sources and their methodology. We should think about it
ourselves and come to a conclusion, not blindly going on already verified reports” (Author’s
field notes, April 15, 2019).
When using tweets or Facebook posts, verified, official accounts were preferred over non-
verified accounts. The same applied for verifying images. To verify an image, it had to be
cross checked with an image from an official source, since an unverified image could not be
used to verify another image. Verified accounts on Facebook and Twitter carry a blue check
mark indicating that the account is authentic. Only accounts of public interest are verified by
the platforms, e.g. journalists, politicians, political parties, corporations or NGOs.
When verifying a query, analysts opened the item in Check and followed a task list consisting
of eight verification steps. The steps taken would vary depending on the item. For images, the
first step would normally be to do a reverse image search to verify the authenticity of a photo,
or to decide if it was taken out of context. A reverse image search uses algorithms to locate
similar or duplicate images posted online, sometimes allowing the analyst to trace down the
original or authentic image. If the person portrayed in a photo was a public figure, analysts
could try to browse that person’s official Twitter account in search of the original photo. This
method can be illustrated with the following example.
The tipline received an image showing Bollywood actress Kareena Kapoor walking on a
street, purportedly on her way to the polling station, while holding her son’s hand (see Figure
3). Her son can be seen dressed in a t-shirt with the print “NAMO AGAIN!”, a statement
indicating support for the re-election of prime minister Narendra Modi. Following standard
procedure, the image could easily be debunked. In this case, since the person portrayed in the
image was a well-known public figure, the analyst could simply cross-check the manipulated
image with the authentic photo posted on Kapoor’s Instagram handle, where no such print can
be seen.
Figure 3. Screenshot of a manipulated image received via the tipline. The text “NaMo again!” has been added to the boy’s t-
shirt (Check).
Figure 4. Screenshot of the Check verification task list. Analysts followed the task list and checked each box upon completion
of the verification step (Check).
A type of query frequently received through the tipline was screenshots of tweets. There are
several online tools with which a user can generate fake tweets with ease. A common
strategy seems to be to misattribute a tweet to a public figure, thus attacking e.g. a politician
or political opponent. To verify screenshots of tweets, an analyst would cross-check them against
the official handle of the attributed person. In cases where tweets had been removed from the
primary source, the tweet was unverifiable.
In a straightforward example, Checkpoint received a screenshot of a tweet, where Modi
thanked his rival Imran Khan after being congratulated on the electoral victory. The analyst
could easily verify it by searching the PM’s official Twitter account for the original tweet.
Figure 5. A screenshot of a tweet received via the tipline. The tweet could be traced to Narendra Modi’s official Twitter handle and proved to be authentic (Check).
Claims or statistics would be cross-checked with official sources. For example, a message,
claiming that the Modi government had introduced new legislation regarding rape victims,
was forwarded to the tipline. According to the message, a victim of rape “has the supreme
right to kill” the perpetrator without facing legal consequences, as per Indian Penal Code 233.
The claim could easily be debunked by analysts as the Indian Penal Code is accessible in the
public domain. With a quick search, an analyst could conclude that clause 233 of the IPC dealt
with “offences related to counterfeiting coins”, and not regulation around special rights of
rape victims (Check).
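The debunk amounts to a lookup in a public-domain source. It can be sketched with a one-entry table: the section summary is taken from the example above, while the function and table structure are my own illustration.

```python
# Tiny stand-in for the public-domain Indian Penal Code; only the section
# relevant to this example is included.
IPC = {
    "233": "offences related to counterfeiting coins",
}

def check_claim(section: str, claimed_subject: str) -> bool:
    """True only if the cited IPC section actually concerns the claimed subject."""
    actual = IPC.get(section, "")
    return claimed_subject.lower() in actual.lower()
```

Here the viral message attributed rape-related provisions to section 233; the lookup shows the section concerns counterfeiting, so the claim fails.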
In some cases, analysts were allowed to make exceptions to the rule of only using official
sources, when such sources could not be found. A little less than a month into the project,
team leaders, after discussing with Fergus Bell, decided that media reports were
sometimes necessary to use as evidence to verify a query. This was introduced after the team
noted how many of their verification reports could not be supported by official, primary,
sources.
In cases where analysts were to verify that an event took place at a certain date and location
e.g. a rally, protest, terrorist incident or military incident, they could use news stories as
historical records. This was reserved for cases when sources did not “exist anywhere else in
public data” (internal document). These cases required the use of at least two news articles
published by sources independent from each other. News items served only as supplementary
evidence and were not to be used as the sole sources to verify a claim.
In cases where an image needed to be verified, but could not be traced to a source in the
public record other than news items, analysts were to refer to news agency websites or the
government Press Information Bureau, as photos posted by news agencies often carry photo
credits, as opposed to online news sites.
One day I was in the process of verifying an image, but I was not able to find the original photo.
The image depicted Kanhaiya Kumar, a candidate running for the Communist Party of India
(CPI), as he was delivering a speech (see Figure 6). Behind him, a controversial map of India
can be seen, where the northern border has been distorted so that parts of Kashmir and Punjab
belong to Pakistan. The implied message seems to have been that Kumar favours Pakistan’s
claim to some of the disputed regions (Author’s field notes, April 22, 2019).
The image carried some evident traces of manipulation, as the blurred edges surrounding the
outline of the man in the picture suggest he has been cut out from the original background.
But I needed to find evidence to finish the verification process. A reverse image search
revealed that the image was inauthentic – the map had been photoshopped. The authentic
photo, being a stock photo, could be linked to several media reports. But since we could not
trace the image to an original source, it was unverified by our methodological standards and
could not be used as evidence; nor could I find any official source other than media reports
that carried the photo (Author’s field notes, April 22, 2019).
I took to Tweetdeck in an attempt to trace the original photo. Using a set of search strings I
scrolled back in time in the archives. Eventually, I managed to establish where and when
the photo was clicked: September 10, 2015, during the presidential debate at
Jawaharlal Nehru University, New Delhi. Still, I could not find the original photo. Instead,
having confirmed the time and place where the photo was clicked, I managed to find several
videos of the event published on YouTube. In the videos, the candidate can be seen delivering
a speech in a tent, wearing the same outfit as in the photo and holding a
pen in his right hand. There was no trace of any map behind the candidate. After discussions
with team leader Pamposh Raina, we decided to use these three crowdsourced videos as
evidence for our verdict, despite the fact that they were uploaded by unverified sources, and
proceeded to debunk the claim (Author’s field notes, April 22, 2019).
Figure 6. A manipulated image depicting candidate Kanhaiya Kumar (Communist Party of India, CPI) as standing in front of a distorted map (Check).
The use of official sources distinguished the verification practiced by Checkpoint from
verification as practiced by fact-checkers. Consider the differences in how Checkpoint
analysts and fact-checkers handled the following case. The tipline received a meme, attacking
the Nehru-Gandhi family by implying that the poorly funded Indian Space Research
Organisation, or ISRO, had to transport its rockets on bullock carts, while the then ruling
Gandhi family enriched themselves and partied on a plane.
The claim read: “Never forget, When ISRO was carrying their rocket’s part on Bullock Carts,
Gandhi family was celebrating birthday on a chartered plane” (see Figure 7).
Figure 7. Screenshot of a Facebook post. In the meme, it is argued that the Gandhi family enriched themselves whilst the ISRO was being underfunded.
The meme carried two aligned pictures supporting the claim: the top image depicted a bullock
cart drawn by a cow, allegedly carrying rocket parts for the ISRO. The bottom image showed
members of the Gandhi family, including Indira Gandhi and a young Rahul Gandhi, who
can be seen on an airplane, allegedly celebrating a birthday party.
The intent behind the meme was likely to mislead, analysts suspected, as the images seemed
to portray two separate events not corresponding with each other. In order to reach that
verdict, the analyst tasked with the item first had to verify the involved photos to see if they
corresponded with the claim. The picture of the bullock cart could be traced to the official
ISRO website, which corresponded with events in 1981 when bullock carts were indeed used
to transport parts for the APPLE satellite. However, the second image of the Gandhi family
could only be traced to a news article posted by Times Now in 2018. The photo was credited
to the Twitter handle @CongressInPics – an unverified account. The article suggests the
photo was clicked in 1977 on Rahul Gandhi’s birthday, but attributed no source to the
information. Approaching verification from Checkpoint’s methodology, neither the news
article nor the Twitter handle could be used as sources, since media reports were not counted
as valid sources and the Twitter account was unverified. By the same standards, the analyst
could not conclude that the photo was taken in 1977, since no source was attributed to the
information. Since the top photo could be verified but the bottom photo could not, no
conclusion could be reached and, after discussion between team members, the query was
dropped (Author’s field notes, April 22, 2019).
A few weeks prior to Checkpoint receiving the query, the same meme went viral on
Facebook. Fact-checkers from The Quint debunked the meme, taking almost the same steps
but approaching the sources differently. The Quint established that the photo of the bullock
cart depicted events in 1981, not by referring to the photo archive of the ISRO, but to an
article published by Livemint. The article did not carry a duplicate photo of the bullock cart,
but a similar photo, credited to the ISRO, purportedly depicting the same bullock cart (The
Quint, 2019).
The fact-checkers also confirmed that the Gandhi family photo showed the family celebrating
the birthday of Rahul in 1977, by referring to the same news story that was rejected by
Checkpoint. Despite the fact that neither the photo nor the date in which it was clicked could
be verified, the story was treated as a valid source (Times Now, 2018). The fact-checkers built
their case using sources from these two media reports and concluded that the meme was
misleading, since the “dates of the ‘birthday’ photo and the ‘bullock cart’ photo in question
[do] not match, they also have no relationship to each other” (The Quint, 2019).
Although it might be difficult to generalize, the above example might suggest two things.
First, there seems to be a pragmatic approach to sources in fact-checking. In the mentioned
case, no clear methodology seems to have been set regarding the use of sources, and the need
to reach a verdict seems to have been prioritized at the cost of quality sources.
Secondly, fact-checkers could be inclined to trust other actors within the media landscape.
The authority of a media organisation was prioritized over the actual information conveyed
by that actor. The image of the Gandhi family, originating from an unverified Twitter
account, was considered a valid piece of evidence because it had been used by a news outlet.
5.9.2 A deviation from methodology
Some cases demanded that analysts deviate from the methodology, although those cases
were supposed to be kept to a minimum. One of the most frequently verified queries was an
image purportedly depicting Wing Commander Abhinandan Varthaman wearing a saffron
scarf with the BJP lotus symbol. Abhinandan had allegedly just cast his vote for the BJP
(Author’s field notes, April 15, 2019).
The man had risen to fame in the aftermath of the Pulwama terrorist attack, which saw
escalated tensions in relations between India and Pakistan. On February 27, Abhinandan was
part of an Indian sortie mission, flying over Pakistani territory to intercept terrorist activity.
The fighter pilot became involved in a dogfight with his Pakistani counterpart, and was shot
down when his plane was hit by a missile. Abhinandan ejected from the plane and landed
safely on the ground, only to be captured by the Pakistani Armed Forces. For three days he
was held captive before his release. He returned to India widely praised as a hero, seen as a
symbol of courage and saffron8 patriotism.
The image was widely amplified by pro-BJP accounts on Facebook and Twitter. Third party
fact-checkers debunked the claim, concluding that the man in the picture was in fact not
Abhinandan, but a look-alike sporting the same distinct handlebar moustache as the real man.
By comparing the photo of the look-alike with an original photo of Abhinandan, fact-checkers
pointed at several differences in the men’s facial features, such as nose size and moles. They
also pointed to the fact that Indian Air Force personnel are barred from political participation
under the Manual of Air Force Law, and found it unlikely that the Wing Commander would
violate that decree (Alt News, 2019; Usha, 2019).
8 The saffron colour is associated with Hinduism.
When the photo found its way to the tipline, an analyst at Checkpoint refrained from verifying
it by analysing facial features. According to the analyst, facial analysis was not in line with
methodological standards and its use was not scientifically rigorous enough. The analyst
pursued other verification measures to reach a verdict (Author’s field notes, April 15, 2019).
However, neither the IAF nor the Wing Commander himself had denied the allegation that
the man had just voted for the BJP on their official channels. Thus, the claim could not be
checked against official sources. The analyst proceeded to check the electoral roll to
find out if Abhinandan could have possibly cast his vote, as was alleged. The analyst wanted
to know if Abhinandan was registered to vote in his home state Tamil Nadu. Since the claim
reached the tipline on April 15 and voting was to take place in Tamil Nadu only on April 18,
the analyst could then effectively debunk the claim. However, the National Voters’ Service
Portal, an online database where one can access the electoral roll, returned no search
results. The analyst was left with little hard evidence to debunk the claim, and by
Checkpoint’s standards the item was inconclusive (Author’s field notes, April 15, 2019).
Identical claims were continually submitted to the tipline. Given the number of queries
regarding the matter, and the symbolic nature of the man involved in the allegation, team
leaders felt pressured to act. A discussion emerged between analysts and team leaders on how
to go forward with verification. Everyone involved agreed that the photo was a fake, but
disagreed on how to debunk it. Fergus Bell suggested that the team should find an official
photo, posted by the IAF, and do a facial analysis themselves. However, no such photo could
be located. Team leaders decided to make an exception, and go about verification in the same
way as the third party fact-checkers. The analyst, somewhat hesitantly, did this by referring
to a photo from an article published by News 18, showing Abhinandan as he was
released from custody by Pakistani authorities (Author’s field notes, April 16, 2019).
Figure 8. A photo of Abhinandan’s doppelganger (Check).
5.9.3 Setting a verdict
After having verified a query, the analysts wrote a short description, explaining the case and
how a conclusion was reached. In a separate box, the analysts provided all links to the
evidence used to verify the item for transparency reasons. A user could thus follow the
verification process and revisit the evidence supplied. A vital part of the verification report
was to be transparent with the methodology and sources used to reach a verdict. “Anybody
who is familiar with these tools and techniques can use the process, follow the steps and be
able to generate a verification report” (Joshi, S., personal communication, April 17, 2019).
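The report-card structure described above, a short explanation plus a separate box of evidence links, could be modelled roughly as follows. The class and field names are hypothetical illustrations; the actual Check interface is not documented here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReportCard:
    """One verification report: a short explanation of how a conclusion
    was reached, plus all evidence links kept separate for transparency."""
    query_id: str
    description: str
    evidence_links: List[str] = field(default_factory=list)

    def is_transparent(self) -> bool:
        # A report without sourced evidence cannot be retraced by users.
        return len(self.evidence_links) > 0

card = ReportCard(
    query_id="Q-0412",
    description="Photo traced to a 1981 ISRO archive image; unrelated to the claim.",
    evidence_links=["https://www.isro.gov.in/", "https://www.livemint.com/"],
)
print(card.is_transparent())  # True
```

Keeping the evidence links as structured data, rather than burying them in prose, is what lets a user "follow the steps and be able to generate a verification report" themselves.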
The use of transparency is in line with fact-checkers’ routines. Graves (2013) found that fact-
checkers often claim transparency to be a crucial part of their work. Transparency, as Graves
argues, “qualifies as a new objectivity” since it does not deny some of the biases at play in the
human psyche, but works as a counterweight by allowing audiences insight into their work (p.
68). It builds trust among audiences while simultaneously allowing fact-checkers to be more
persuasive, and defend themselves to critics (p. 179). By being transparent with the use of
sources, anyone can follow the steps taken to verify a claim, simulating the notion of
replicability in science (p. 179).
The final step of each verification process, after a verification report had been reviewed and
confirmed by team leaders, was to rate the report card with a verdict. The following rating
scale was applied.
TRUE: the item, based on the evidence applied in the report card (and no other information),
could be considered true by anyone who would follow the steps taken to reach the verdict.
FALSE: the item, based on the evidence applied in the report card (and no other information),
could be considered false by anyone who would follow the steps taken to reach the verdict.
MISLEADING: the item, based on the evidence applied in the report card (and no other
information), could be considered misleading by anyone who would follow the steps taken to
reach the verdict. Misleading means that the information could be true, but it is taken out of
context or skewed in a manner to mislead.
DISPUTED: a verdict could not be reached as there are different sources of equally valid
information that both debunk and verify the item.
INCONCLUSIVE: a verdict could not be reached as there is insufficient evidence to verify
or debunk an item.
(Standard Operations Procedure, internal document).
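As a rough illustration, the five verdicts quoted above could be modelled as an enumeration, with the last two marking queries the evidence could not resolve. This is a sketch of the scale only, not Checkpoint's actual implementation.

```python
from enum import Enum

class Verdict(Enum):
    """The five verdicts from Checkpoint's rating scale."""
    TRUE = "true"
    FALSE = "false"
    MISLEADING = "misleading"       # true information, but out of context
    DISPUTED = "disputed"           # equally valid sources conflict
    INCONCLUSIVE = "inconclusive"   # insufficient evidence either way

def unresolved(verdict: Verdict) -> bool:
    # Only the first three verdicts actually resolve a query.
    return verdict in (Verdict.DISPUTED, Verdict.INCONCLUSIVE)

print(unresolved(Verdict.MISLEADING))    # False
print(unresolved(Verdict.INCONCLUSIVE))  # True
```

Even laid out this way, the scale remains categorical rather than graded, which is the limitation analysts describe below for memes combining several claims.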
As seen here, the assessment of truthfulness was a binary task – either something is true or it
is false. This can be practical when dealing with a single claim or the authenticity of a photo –
either it is authentic or inauthentic, manipulated or genuine. However, analysts found
that such a binary rating system sometimes posed limitations on their
work. Memes, for example, often carry several elements that combine textual claims and
images. Some of these claims might be true, whereas others might be misleading or false.
Some pictures might be authentic, but others inauthentic. In these instances, a holistic analysis
would have to be made to reach a verdict on a piece. Team members were therefore told to
avoid such pieces, as they required more in-depth fact-checking to verify.
Fact-checkers like the American fact-checker Politifact use different, non-binary rating
scales, with verdicts such as “True”, “Mostly True”, “Half True”, “Mostly False”,
“False” or the worst rating: “Pants on Fire” (Politifact, 2018). Given the nature of in-depth
fact-checking, a fact-checked piece often falls somewhere on this scale; presidential
speeches, for example, make many claims and rarely get all of them 100 %
correct. A verdict can therefore capture nuances of truth.
The analysts at times expressed uncertainty about the efficacy of the rating scale adopted by
the project. Shalini Joshi saw it fit to consider using a model similar to the one used by
Politifact for future verification projects, since “there’s more scope of saying that this is not
absolutely true or absolutely false… and there’s scope for the user to know that there’s more
to this than just true or false” (Joshi, S., personal communication, May 28, 2019).
5.10 Evaluation
When the elections were over, and with them the verification phase, the team had pushed out
512 verification reports, counting items marked as true, false, misleading and inconclusive
(Check). This number included duplicate items. Overall, on the last day of the verification
phase, the tipline had received some 79,000 queries (ibid.).
As previously mentioned, the majority of these items were out of scope for verification, since
they were either not related to the general election or were unverifiable according to
Checkpoint’s methodological standards. Claims that were either opinion-laden, or demanded
fact-checking and journalistic measures to be verified, were not addressed.
Initially, some analysts were outspoken in their critique of the verification effort in terms of
efficiency, or lack thereof. Two weeks after Checkpoint was launched, an analyst told me:
We are actually doing nothing. It is very inefficient… you know the
population of India, you know how many items that come in every day…
and we’re [verifying and] sending seven items [on a daily average]? So on
the first day when this project was launched, the number [of items we had
received] was 25,000 or something like that9 […] So imagine how
disappointed the people are (Participant 7, personal communication, April
17, 2019).
However, since Checkpoint was mainly a research project, success, as defined by the mission
statement, would not primarily be measured in terms of how many verification reports were sent out, but
rather in the findings of the research. The primary function of the verification process was not
that of a service to the public. Its function was to encourage WhatsApp users to participate as
“listening posts” by submitting claims and messages to the tipline (Proto, 2019). As clarified
by Shalini Joshi: “People would not send us queries if we would not send out verification
reports back” (S. Joshi, personal communication, April 17, 2019).
Yet, as the first two months were dedicated to verification, it is worth examining how this
effort was experienced by its participants.
More than a lack of efficiency, analysts experienced inconsistency and lack of clarity in the
project. Although there were some guidelines in place for determining what queries to
address, analysts felt that the guidelines were not enough. According to one analyst, there was
no “clarity in the items that we were going to address, the items that we would not address,
what are the ground rules that we have to follow… We were doing things blindly and
haphazardly” (Participant 7, personal communication, April 17, 2019).
Daily discussions revolved around “the fine line of data verification and fact-checking”, as
one analyst said (May 29, 2019). “There is always confusion about what is verification and
what is fact-checking”, said another (Participant 6, personal communication, May 27, 2019).
In a nutshell, all queries that could not be verified using online verification tools and official
sources were out of scope. Some claims could not be verified with these online
tools alone, but needed more in-depth investigative measures, such as
fact-checking. But the line separating verification from fact-checking was not always clear
to team members. Since most analysts were from a journalistic background, they might have
had a journalist’s approach to verification.
9 The exact number was 24,916 queries between April 2 and April 3 (Check).
5.10.1 A gradually improved verification process
While analysts agreed that the verification phase had initially been marked by confusion,
they felt that by the time they were wrapping up the verification effort they had more clarity in the
process.
When we kind of started to work properly it was the end, we had reached
the end. Had it been the case that we […] would have taken these measures
from the beginning, the results could have been much different and we
could have reached out to more people and we could have addressed much
more items (Participant 7, personal communication, May, 2019).
However, not having all the guidelines in place was an inherent part of the concept behind the
project, according to Joshi. Although the Pop-Up Newsroom concept builds on insights from
previous experiences, it seeks to start each project with a blank slate. Since each context is
unique, there is no one-size-fits-all framework that can be directly exported to and
implemented in every context.
The team had to adjust the basic framework to the Indian election context, and find a viable
solution to work within the legal restrictions that prevented them from producing a
journalistic output.
As the team tested the foundational framework they noticed what worked and what did not
work, and they gradually improved the process accordingly. For instance, a month into the
project, team members and team leaders put together a document which clarified some of the
confusion regarding what sources to use. The changes allowed the team to address some
claims by using links from media reports as historical evidence when no official sources were
available.
You start from scratch and then you start building those blocks along the
way depending on your own context and the kind of queries that you’re
getting. And then, by the end of the project, you have something solid in
place […]
I feel like now if I’d have to do the project [again], I’ll be better prepared.
I’ll have the methodology in place, I’ll have a template in place… I’ll be
able to tell the team this is not what you should be addressing – this is
definitely what we should be addressing. But this, in the beginning, it
wasn’t so clear (Joshi, S., personal communication, May 28, 2019).
5.10.2 Limitations of online verification tools
Team members felt restricted by having to rely solely on online verification tools. Some thought that
the journalistic skillset possessed by team members should have been used more effectively.
One team member noted the need for more in-depth analysis, commonly practiced by fact-
checkers, as many of the incoming claims could not simply be verified by tracing a primary
source.
Actually to be very frank, I think this project has its handicaps. The set of
tools that we have, they’re not fool proof. And we need to put our own
knowledge to actually distinguish fake news coming in. I don’t think the
tools themselves is enough to deal with all kind of stuff that we’re getting,
we need some strong knowledge: election knowledge, political knowledge
(Participant 4, personal communication, May 1, 2019).
Team members also felt that being restricted to official sources hampered them in their work.
Several analysts raised the need for going beyond official sources, since not all claims can be
cross-checked with official statements or public records. “We cannot always expect
something to come out officially”, one analyst noted (Participant 4, personal communication,
May 1, 2019). Another team member told me: “Check, I think, is built on the premise that
things can be verified because we have means of verifying them. I would argue that in this
country half of our things are not digitised, [which means] we don’t have the means of
verifying” (Participant 9, personal communication, May 1, 2019).
In the verification process, the thing is we don’t have a proper database to
cross-reference things… in Europe or the USA they might be having a huge
database. As for us, for example, if it is criminal records, we don’t have that
organized or centralised in a database. We cannot cross-reference such
claims. We’ll have to dig it out from various departments and cross-
reference it with them, but we don’t have that kind of thing… that’s the
primary drawback we face in India (Participant 8, personal communication,
May 1, 2019).
As a complement to official sources, analysts felt a need to cross-check claims with sources
on the ground. “They could actually use us if they had the right intentions because we have
the sources on the field, on the ground. But they don’t allow us to verify by using them.”
(Participant 7, personal communication, May 1, 2019).
In retrospect, Shalini Joshi agreed that not being able to use journalistic or fact-checking
measures limited them in their work. “I think it has been quite challenging to focus on using
just a scientific process to verification and not going into any kind of fact-checking.
Sometimes it felt like there was a lot more we could do.”
But as the verification process became clearer for everyone involved, she also noted the
advantages of the methodology. “At times it also felt like this is a very objective way of
addressing a query and this should be convincing to the end user”. She also pointed to the fact
that while fact-checkers might be publishing one or two reports a day, Checkpoint could,
despite the limitations, be more efficient in terms of “volume of reports pushed out” and
number of “queries addressed”.
Ritvvij Parrikh noted that verification allowed the team to produce more verification reports
whereas fact-checking is a process that needs more time and hence produces fewer reports.
The fact that we were focusing on verification, was a strength. Had we gone
down to fact-check, it would have taken a lot more time. So verification
allows us to go broad, in terms of number of queries. Fact-checking would
have allowed us to take on and go deep with it. And the kind of research
project, the problem statement [to examine] what is happening inside of
WhatsApp, verification was the right approach (Parrikh, R., personal
communication, May 30, 2019).
One insight Joshi shared is that for future verification projects in India,
there is a need to work more with visual verification reports.
I also feel that we should use more visuals when working in a project like
this […] for a lot of WhatsApp users, literacy is a barrier. So if you get a
report that is more visual and not so much text it makes more sense. And
that is also something that they can forward more easily in India. Nobody
likes to read that much. And I guess that’s the nature of closed messaging
apps like WhatsApp. People don’t read but they just forward… so if it’s
more visual than textual then that’s also useful for the user. So going
forward in another project I’d say we should use more visuals than text
(Joshi, S., personal communication, May 28, 2019).
5.10.3 Lack of clarity in the research process
For a research project, a lot of time and resources were focused on the verification
effort. Throughout the election campaign, relatively little time was dedicated to research. In
fact, no data specialists were involved in the project. Most team members had a background
in journalism as opposed to research. One analyst noted:
Really what this was, was a way to study misinformation. And in this case,
what we should have had were more data scientists on board and not really
journalists. Because if output was not important, and resources were scant,
more effort should have been put into just collecting the data (Participant 9,
personal communication, May 29, 2019).
Another analyst shared these concerns. “I was surprised that all of them [the analysts] were
journalists. Because I was thinking [since] this is a research project, I thought these people
must have some background in research. But that wasn’t the case.” (Participant 4, personal
communication, May 29, 2019).
The hiring of so many journalists was due to a tight timeline in the recruitment
process.
If we had a slightly more comfortable timeline, I think we would have
focused a lot more on people with more analytical skills and structured
thinking. Because in the end this is a project about dealing with data. This
isn’t a creative project – to us it’s not a story telling project. So the skills
that we would apply would be analysis, data structure etcetera (ul Hadi, N.,
personal communication, May 30, 2019).
The purpose of the project was to map out the misinformation ecosystem, particularly on
WhatsApp. This was to be done by conducting a content analysis of the collected data. But as
the project was initiated, there were no clear research questions outlined, and only a
preliminary coding scheme had been designed to tag the collected items. It was not until the data
collection/verification phase drew to its end that the project transitioned into the data analysis
phase and the effort was redirected at research.
The preliminary coding scheme had to be re-designed after the election to correspond with the
newly phrased research questions, with more sophisticated tags created.
This meant that analysts effectively had to redo the tagging of most items in Check. Had the
project set out with a clear directive as to the purpose of its research, and focused its effort
on handling that data accordingly, much of this extra work could possibly have been avoided,
or at least reduced.
“The sorting practices should have been clear in the beginning and the end goal of that data
should have been clear in the beginning. Instead the emphasis was on responding [to queries]”
(Participant 9, personal communication, May 29, 2019).
Although the participants were split in their views of the verification effort, they agreed that the
research would be fruitful.
I think that the research is going to be very insightful and fruitful for all the
stakeholders in this project. The research part is the most important thing as
an outcome. I still think that, as I said earlier, the verification and fact-
checking is a failed process, and I really do not believe in it. I think this [the
research] is the part that is going to make a difference as it will be really
fruitful (Participant 7, personal communication, May 28, 2019).
5.10.4 Role of Facebook – too little too late?
Facebook has received its fair share of criticism regarding the spread of misinformation on its
platforms. In recent years, however, it has stepped up its efforts in the fight against
misinformation in India and abroad, as exemplified by its third party fact-checking program.
Still, some analysts saw the commissioning of a research project during the election as an
untimely response, with no direct bearing on the misinformation problem.
As noted by one analyst:
We know what Facebook’s role has been in different countries’ elections in
the past. We all know about it. Multiple journalists have been there on the
field, written about it, reported about it – how fake news got this wide
spread all over social media through WhatsApp and Facebook […]
My point is that if Facebook was really concerned about fake news and the
spread of fake stories – and about things that could actually instigate and
harm people, or harm communal harmony – then this should have started at
least two years ago, or one year before the elections. Because that was the
most vital time, when people out there were campaigning, and addressing
rallies and all of that. Why are they doing it right now when the damage is
done? The damage is really done, and the opinion has been built. Now you
cannot do anything […]
I think it’s a damage control strategy by Facebook […] they thought it
would be damaging for their image and so this was done in a very hurried
manner (Participant 7, personal communication, April 17, 2019).
Another analyst questioned the intentions of the corporation.
I think they want a bit of credibility. Because they have been trashed left,
right and centre. Some people were really critical and thought that
Facebook is not doing anything, they are just worried about their business.
So I think they are doing this for credibility, I guess. That’s the only thing
for them […]
I mean obviously India is a huge market for them. And credibility also
matters (Participant 3, personal communication, May 1, 2019).
It should be noted, however, that even if Facebook could have commissioned a research
project earlier, current Indian law prevents it from addressing misinformation by
funding editorial fact-checking projects.
Fergus Bell underlined, despite the criticism, that the project was a step in the right direction.
Misinformation on WhatsApp is very publicly a problem in India and it’s
very easy to just blame platforms, but not many people come to them with
actual solutions to try. And we did. And I think that’s why they want to
fund it. What’s in it for them? Potentially finding out ways to address
misinformation on their platforms (Bell, F., personal communication, May
23, 2019).
Nasr ul Hadi said:
This is a problem that has direct relevancy in the business, and projects like
this help expand their thinking around what they need to enable on their
platform not just for Indian but in other ecosystems as well. So I’m pretty
sure that things that they take away from this project they will apply to
other things, other projects as well around the world […] and the American
election is a big one that they’re looking in to (ul Hadi, N., personal
communication, May 30, 2019).
6 Conclusion
This study set out to examine a verification initiative launched by the Checkpoint project during
the 2019 Indian elections. Based on ethnographic fieldwork, I have presented a detailed
account of the implementation of the verification process, as well as the challenges
encountered by the team of analysts and stakeholders at large.
By approaching Checkpoint as a case study, I hoped to shed light on the Pop-Up Newsroom
concept – a series of global, collaborative fact-checking initiatives – and how that concept
was applied during the Indian election. Checkpoint was the result of such a collaborative
effort and involved stakeholders from different countries and disciplines.
The verification initiative was based on a framework in which user generated content was
crowdsourced from WhatsApp. With technological assistance from Meedan, Checkpoint
piloted a WhatsApp tipline, to which users could submit claims and rumours they encountered
on the platform. A team of analysts then verified or debunked those claims and sent back
verification reports to users who initiated the verification requests. The tipline facilitated
interaction with WhatsApp users and allowed some level of automation in the verification
process as queries, after verification, automatically prompted the distribution of verification
report cards to users.
However, launching a tipline meant that the team was immediately flooded with thousands of
queries. Many of those queries were duplicates, but there was no identification mechanism in
the interface that could cluster these queries. In effect, analysts had to take to the laborious
task of manually clustering similar or identical queries before Check could distribute the
verification reports to users.
If tiplines are to be a viable solution in the fight against misinformation, more sophisticated
technology will be needed to improve clustering of identical queries. This could possibly be
done by integrating (already existing) technology, like image match check, with the interface.
If the interface could identify and match newly submitted queries with already verified
queries in the database, and then automatically distribute a verification report card, such a
development could effectively improve responsiveness from an end user’s perspective. For
usage in other countries, Check would also need to integrate with other messaging networks,
as internet users of some countries rely on other messaging apps than WhatsApp for
communication.
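The clustering improvement suggested above could, in principle, build on simple image hashing: identical submissions hash to the same value and collapse into one cluster, which a single report card can then answer. The sketch below uses a toy average hash over raw pixel lists purely for illustration; a production system would use perceptual hashes of decoded images and near-duplicate matching, not this exact code.

```python
def average_hash(pixels):
    """Toy 'image match' hash: bit i is 1 if pixel i is above the mean.
    Identical images always produce identical hashes."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def cluster_queries(queries):
    """Group incoming queries whose image hashes are identical, so that
    one verification report can answer the whole cluster at once."""
    clusters = {}
    for query_id, pixels in queries:
        clusters.setdefault(average_hash(pixels), []).append(query_id)
    return list(clusters.values())

# Two identical submissions and one different one:
queries = [("q1", [10, 200, 10, 200]),
           ("q2", [10, 200, 10, 200]),
           ("q3", [200, 10, 200, 10])]
print(cluster_queries(queries))  # [['q1', 'q2'], ['q3']]
```

Matching a new query's hash against hashes of already verified items in the database would likewise allow the interface to dispatch an existing report card automatically, along the lines proposed above.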
This study has also shown that while technology is a vital step in the fight against
misinformation, it offers no quick fix. Open source verification tools proved particularly
effective when dealing with user generated content, but they are often restricted to simpler
verification measures such as tracing the origins of an image or the authenticity of a tweet.
These tools need to be complemented by in-depth investigation, as commonly
practiced by fact-checkers. Traditional journalistic methods are still vital to the craft, be it
by consulting experts or cross-checking claims with sources “on the ground”. Analysts
suggested that this might be even more important in the Indian context, where access to
public records in government databases is limited.
There were several aspects that this case study did not address. Professional fact-checkers
today rely heavily on funding from third parties, which poses a question of the sustainability
of the trade. More research can be done on this field, to explore viable business models for the
profession. Fact-checking is a rapidly evolving field and much has happened since the first
dedicated fact-checker saw the light of day in 2003. The global fact-checking movement has seen the
conception of several collaborative fact-checking initiatives. Such collaborative models are
continuously tested and introduced around the world. Another such fact-checking initiative
will be introduced by Pop-Up Newsroom during the 2020 US election. This line of
collaborative projects invites more research on how such fact-checking initiatives can
contribute to the media ecosystem in the fight against misinformation.
References
Allcott, H. & Gentzkow, M., 2017. Social Media and Fake News in the 2016 Election.
Journal of Economic Perspectives, Volume 31, Number 2, pp. 211-236.
Alt News, 2019. Wing Commander Abhinandan Varthaman voted for BJP? No, he’s a
lookalike. [Online] Available at: https://www.altnews.in/wing-commander-abhinandan-
varthaman-voted-for-bjp-no-its-a-lookalike/ [Accessed 23 August 2019].
American Dialect Society, 2018. “Fake news” is 2017 American Dialect Society word of the
year. [Online] Available at: https://www.americandialect.org/fake-news-is-2017-american-
dialect-society-word-of-the-year [Accessed 23 August 2019].
Anand Choudhury, A., 2019. Didi meme: SC frees BJP youth neta, directs her to apologise.
[Online] Available at: https://timesofindia.indiatimes.com/elections/lok-sabha-elections-
2019/west-bengal/sc-grants-bail-to-bjp-leader-directs-her-to-apologise-for-sharing-mamata-
meme-on-fb/articleshow/69324236.cms [Accessed 23 August 2019].
Aneez, Z., Neyazi, T. A., Kalogeropoulos, A. & Nielsen, R. K., 2019. Reuters Institute India
Digital News Report, n.p.: Reuters Institute for the Study of Journalism.
BBC, 2017. User-generated content and the UGC hub. [Online] Available at:
https://www.bbc.co.uk/academy/en/articles/art20150922112641140 [Accessed 23 August
2019].
BBC (2018) DUTY, IDENTITY, CREDIBILITY: Fake news and the ordinary citizen in
India. Available: https://www.bbc.co.uk/mediacentre/latestnews/2018/bbc-beyond-fake-news-
research
Bhardwaj, A (2017) How BJP’s IT Cell Waged War And Won In UP. Newslaundry.
Available: https://www.newslaundry.com/2017/03/17/how-bjps-it-cell-waged-war-and-won-
in-up [Accessed 23 August 2019].
Bhargava, Y., 2019. WhatsApp launches ‘tipline’ to tackle rumours ahead of elections.
[Online] Available at: https://www.thehindu.com/business/whatsapp-launches-tipline-to-
tackle-rumours-ahead-of-elections/article26710050.ece [Accessed 23 August 2019].
Bloomberg (2018). A Global Guide to State Sponsored Trolling. Available:
https://www.bloomberg.com/features/2018-government-sponsored-cyber-militia-cookbook/
[Accessed 23 August 2019].
Cable, 2019. Worldwide mobile data pricing: The cost of 1GB of mobile data in 230
countries. [Online] Available at: https://www.cable.co.uk/mobiles/worldwide-data-pricing/
[Accessed 23 August 2019].
Campbell-Smith, U. & Bradshaw, S., 2019. Global Cyber Troops Country Profile: India.
Oxford Internet Institute, University of Oxford. [Online] Available at:
https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/05/India-Profile.pdf
[Accessed 23 August 2019].
Carlsen, M., 2019. WhatsApp takes on fake news, rolls out ‘frequently forwarded’ feature in
India. Money Control. [Online] Available at:
https://www.moneycontrol.com/news/trends/whatsapp-takes-on-fake-news-rolls-out-
frequently-forwarded-feature-in-india-4277281.html [Accessed 23 August 2019].
Chadha, K. & Guha, P., 2016. The Bharatiya Janata Party’s Online Campaign and Citizen
Involvement in India’s 2014 Election. International Journal of Communication, Volume 10,
pp. 4389-4406.
Chaudhuri, P. & Jha, P., 2019. 3 out of Facebook's 7 fact-checking partners have shared
misinformation post-Pulwama. [Online] Available at: https://www.altnews.in/3-out-of-
facebooks-7-fact-checking-partners-have-shared-misinformation-post-pulwama/ [Accessed 23
August 2019].
Gleicher, N., 2019. Removing Coordinated Inauthentic Behavior and Spam From India and
Pakistan. Facebook. [Online] Available at: https://newsroom.fb.com/news/2019/04/cib-and-
spam-from-india-pakistan/ [Accessed 23 August 2019].
Cottle, S., 2009. Chapter 3: Participant Observation: Researching News Production. In:
Cottle, S., Newbold, C., Hansen, A. & Negrine, R. Mass Communication Research Methods.
Sage. [Online] Available at:
https://www.researchgate.net/publication/268519876_Participant_Observation_Researching_News_Production
[Accessed 23 August 2019].
ECI, Election Commission of India, 2019. General Election 2019. [Online] Available at:
https://eci.gov.in/general-election/general-elections-2019/ [Accessed 23 August 2019].
ECI, Election Commission of India, 2019. Model Code of Conduct for the Guidance of Political
Parties and Candidates. [Online] Available at: https://eci.gov.in/mcc/ [Accessed 23 August
2019].
Electionland, 2016. Electionland Case Study 2016, n.p.: n.p.
Elliott, J. K., 2018. India WhatsApp killings: Why mobs are lynching outsiders over fake
videos. [Online] Available at: https://globalnews.ca/news/4333499/india-whatsapp-lynchings-
child-kidnappers-fake-news/ [Accessed 23 August 2019].
Facebook, n.d. Fact-checking on Facebook: What publishers should know. [Online] Available
at: https://www.facebook.com/help/publisher/182222309230722 [Accessed 23 August 2019].
Facebook, n.d. WhatsApp Business API. [Online] Available at:
https://developers.facebook.com/docs/whatsapp/ [Accessed 23 August 2019].
Farooq, G., 2018. Politics of Fake News: How WhatsApp Became a Potent Propaganda Tool
in India. Media Watch, 9(1), pp. 106-117.
Funke, D., Tardáguila, C. & Benkelman, S., 2019. Misinformation doesn’t need a free and
open internet to spread. Just look at Kashmir and Hong Kong. [Online] Available at:
https://www.poynter.org/fact-checking/2019/misinformation-doesnt-need-a-free-and-open-
internet-to-spread-just-look-at-kashmir-and-hong-kong/ [Accessed 23 August 2019].
Ganjoo, S., 2019. WhatsApp launches tip line to tackle fake news in India ahead of Lok Sabha
Elections 2019. [Online] Available at:
https://www.indiatoday.in/technology/news/story/whatsapp-launches-tip-line-to-tackle-fake-
news-ahead-of-lok-sabha-elections-2019-1492024-2019-04-02 [Accessed 23 August 2019].
Graves, L., 2013. Deciding What’s True: Fact-Checking Journalism and the New Ecology of
News. Columbia University.
Graves, L., 2017. Anatomy of a Fact Check: Objective Practice and the Contested
Epistemology of Fact Checking. Communication, Culture & Critique, Volume 10, pp. 518-
537.
Graves, L., 2018. Boundaries Not Drawn. Journalism Studies, 19(5), pp. 613-631. [Online]
Available at: https://doi.org/10.1080/1461670X.2016.1196602 [Accessed 23 August 2019].
Haskins, C., 2019. WhatsApp Launches a Tip Line for Misinformation in India Ahead of
Elections. [Online] Available at: https://www.vice.com/en_us/article/eveeyk/whatsapp-
launches-a-tip-line-for-misinformation-in-india-ahead-of-elections [Accessed 23 August 2019].
ICFJ, n.d. Knight Fellowships – Overview. [Online] Available at: https://www.icfj.org/our-
work/knight/icfj-knight-fellowships-overview [Accessed 23 August 2019].
India Spend, 2018. Child-lifting rumours caused 69 mob attacks, 33 deaths in last 18 months.
[Online] Available at: https://www.business-standard.com/article/current-affairs/69-mob-
attacks-on-child-lifting-rumours-since-jan-17-only-one-before-that-118070900081_1.html
[Accessed 23 August 2019].
Iyengar, R., 2019. WhatsApp is getting ready for the world's biggest election. CNN. [Online]
Available at: https://edition.cnn.com/2019/02/06/tech/whatsapp-abuse-india-
elections/index.html [Accessed 23 August 2019].
Jawed, S., 2017. Top fake news stories circulated by Indian media in 2017. Alt News.
[Online] Available at: https://www.altnews.in/top-fake-news-stories-circulated-indian-media-2017/
[Accessed 23 August 2019].
Johari, A., 2019. BJP worker arrested for Mamata meme is free. What happens to others
targeted for social media posts? Scroll.in. [Online] Available at:
https://scroll.in/article/923845/bjp-worker-arrested-for-mamata-meme-is-free-what-happens-
to-others-targeted-for-social-media-posts [Accessed 23 August 2019].
Ministry of Law and Justice, 2010. The Foreign Contribution Regulation Act, FCRA. [Online]
Available at: https://fcraonline.nic.in/home/PDF_Doc/FC-RegulationAct-2010-C.pdf
[Accessed 23 August 2019].
Kaur, A., 2019. Fact Check: Was Gandhi family celebrating birthday as scientists used
bicycle to carry rocket parts? [Online] Available at: https://www.indiatoday.in/fact-
check/story/gandhi-family-birthday-scientists-bicycle-rocket-photo-fact-check-1489639-
2019-03-29 [Accessed 23 August 2019].
Kaur, K. & Nair, S., 2018. India. In: Kaur, K., Nair, S., Kwok, Y., Kajimoto, M., Chua, Y.,
Labiste, M., Soon, C., Jo, H., Lin, L., Thanh, L. & Kruger, A. Information Disorder in Asia
and the Pacific. Journalism and Media Studies Centre, The University of Hong Kong, pp. 2-8.
Kovach, B. & Rosenstiel, T., 2014. The Elements of Journalism. New York: Three Rivers
Press.
Kvale, S. & Brinkmann, S., 2014. Den kvalitativa forskningsintervjun [The qualitative
research interview]. Lund: Studentlitteratur.
Live Mint, 2019. Thanks to Reliance Jio, mobile data rates are cheapest in India: Report.
[Online] Available at: https://www.livemint.com/industry/telecom/thanks-to-reliance-jio-mobile-data-
rates-are-cheapest-in-india-report-1551851856460.html [Accessed 23 August 2019].
Lyons, T., 2018. Hard Questions: How Is Facebook’s Fact-Checking Program Working?
[Online] Available at: https://newsroom.fb.com/news/2018/06/hard-questions-fact-checking/
[Accessed 23 August 2019].
Mac, R. & Dixit, P., 2019. WhatsApp's New Tip Line Is Apparently “Not A Helpline” For
Fake News At All. [Online] Available at:
https://www.buzzfeednews.com/article/ryanmac/whatsapp-fake-news-tip-line-indian-election-
not-helpline [Accessed 23 August 2019].
Mantzarlis, A., 2015. Will verification kill fact-checking? [Online] Available at:
https://www.poynter.org/fact-checking/2015/will-verification-kill-fact-checking/ [Accessed
23 August 2019].
Marrelli, M., 2019. Introducing Election Exchange by Pop-Up Newsroom: Our largest, most
ambitious project yet. [Online] Available at: https://medium.com/popupnews/introducing-
election-exchange-by-pop-up-newsroom-our-largest-most-ambitious-project-yet-
f3fe7d4a04df [Accessed 23 August 2019].
Mattsson, A. & Parthasarathi, V., 2018. A pop-up newsroom to fight fake news: a view from
Swedish elections. [Online] Available at: https://theconversation.com/a-pop-up-newsroom-to-
fight-fake-news-a-view-from-swedish-elections-103107 [Accessed 23 August 2019].
Meedan, 2019. Press Release: New WhatsApp tip line launched to understand and respond to
misinformation during elections in India. [Online] Available at:
https://medium.com/@meedan/press-release-new-whatsapp-tip-line-launched-to-understand-
and-respond-to-misinformation-during-f4fce616adf4 [Accessed 23 August 2019].
Owen, L. H., 2018. WhatsApp is a black box for fake news. Verificado 2018 is making real
progress fixing that. [Online] Available at: https://www.niemanlab.org/2018/06/whatsapp-is-
a-black-box-for-fake-news-verificado-2018-is-making-real-progress-fixing-that/ [Accessed 23
August 2019].
Patel, J. & Chaudhuri, P., 2019. ‘The India Eye’ – The Fake News Factory Promoted by NaMo
App. The Wire. [Online] Available at: https://thewire.in/media/the-indian-eye-fake-news-factory-namo-
app-silver-touch [Accessed 23 August 2019].
Perrigo, B., 2019. How Volunteers for India's Ruling Party Are Using WhatsApp to Fuel Fake
News Ahead of Elections. Time. [Online] Available at: https://time.com/5512032/whatsapp-india-
election-2019/ [Accessed 23 August 2019].
Politifact, 2018. The Principles of the Truth-O-Meter: PolitiFact’s methodology for
independent fact-checking. [Online] Available at: https://www.politifact.com/truth-o-
meter/article/2018/feb/12/principles-truth-o-meter-politifacts-methodology-i/#Truth-O-
Meter%20ratings [Accessed 23 August 2019].
PIB(a), 2018. WhatsApp warned for abuse of their platform. Press Information Bureau,
Government of India, Ministry of Electronics & IT, 3 July. [Online] Available at:
http://pib.gov.in/newsite/PrintRelease.aspx?relid=180364 [Accessed 23 August 2019].
PIB(b), 2018. WhatsApp told to find more effective solutions. Press Information Bureau,
Government of India, Ministry of Electronics & IT, 20 July. [Online] Available at:
http://pib.nic.in/newsite/PrintRelease.aspx?relid=180787 [Accessed 23 August 2019].
Pop-Up Newsroom, n.d. [Online] Available at: https://popup.news/ [Accessed 23 August
2019].
Pratima, T. P., Dubbudu, R. & Factly Media & Research, 2019. Countering Misinformation
(Fake News) In India, n.p.: Factly Media & Research.
PROTO, n.d. FAQs about Project Checkpoint. [Online] Available at:
https://www.checkpoint.pro.to/ [Accessed 23 August 2019].
PTI, 2019. Facebook expands fact-checking network in India, adds 5 more partners to spot
fake news. [Online] Available at: https://www.businesstoday.in/top-story/facebook-expands-
fact-checking-network-in-india-adds-5-more-partners-to-spot-fake-news/story/318468.html
[Accessed 23 August 2019].
Purnell, N., 2019. WhatsApp Adds Tip Line to Fight Misinformation in India. [Online]
Available at: https://www.wsj.com/articles/whatsapp-adds-tip-line-to-fight-misinformation-
in-india-11554200672 [Accessed 23 August 2019].
Reverso, n.d. Preguntas frecuentes [Frequently asked questions]. [Online] Available at:
https://reversoar.com/preguntas-frecuentes/ [Accessed 23 August 2019].
Rocha, E. & Ravikumar, S., 2019. WhatsApp launches India tip line to curb fake news
during polls. [Online] Available at: https://www.reuters.com/article/facebook-
whatsapp/whatsapp-launches-india-tip-line-to-curb-fake-news-during-polls-idUSL3N21K1G5
[Accessed 23 August 2019].
Safi, M., 2018. 'WhatsApp murders': India struggles to combat crimes linked to messaging
service. [Online] Available at: https://www.theguardian.com/world/2018/jul/03/whatsapp-
murders-india-struggles-to-combat-crimes-linked-to-messaging-service [Accessed 23 August
2019].
Satish, B., 2018. How WhatsApp helped turn an Indian village into a lynch mob. BBC.
[Online] Available at: https://www.bbc.com/news/world-asia-india-44856910 [Accessed 23
August 2019].
Shahbaz, A., 2018. Freedom on the Net 2018. Freedom House. [Online] Available at:
https://freedomhouse.org/sites/default/files/FOTN_2018_Final%20Booklet_11_1_2018.pdf
[Accessed 23 August 2019].
Smooch, n.d. [Online] Available at: https://smooch.io/ [Accessed 23 August 2019].
Statista, 2019. Number of monthly active users on WhatsApp in India 2013-2017. [Online]
Available at: https://www.statista.com/statistics/280914/monthly-active-whatsapp-users-in-india/
[Accessed 23 August 2019].
Martínez-Carrillo, N. I. & Tamul, D. J., 2019. (Re)constructing Professional Journalistic
Practice in Mexico: Verificado’s Marketing of Legitimacy, Collaboration, and Pop Culture in
Fact-Checking the 2018 Elections. International Journal of Communication, Volume 13, pp.
2596-2619.
Tardáguila, C., 2019. Here comes a tool, approved by WhatsApp, to automate the distribution
of fact-checks. [Online] Available at: https://www.poynter.org/fact-checking/2019/here-
comes-a-tool-approved-by-whatsapp-to-automate-the-distribution-of-fact-checks/ [Accessed
23 August 2019].
Tardáguila, C., 2019. You can now report a suspicious Instagram post and expect a certified
U.S. fact-checker to verify it. [Online] Available at: https://www.poynter.org/fact-
checking/2019/you-can-now-report-a-suspicious-instagram-post-and-expect-a-certified-fact-
checker-to-verify-it/ [Accessed 23 August 2019].
The Economic Times, 2019. WhatsApp tipline of no use for 2019 Lok Sabha polls. [Online]
Available at: https://economictimes.indiatimes.com/tech/software/whatsapp-tipline-of-no-use-
for-2019-lok-sabha-polls/articleshow/68734867.cms [Accessed 23 August 2019].
The Quint, 2019. Did Gandhis Party in Style While ISRO Used Cycles, Bullock Carts?
[Online] Available at: https://www.thequint.com/news/webqoof/webqoof-rahul-gandhi-
birthday-isro-scientists-work-cycle-bullock-cart [Accessed 23 August 2019].
Times Now, 2018. Throwback picture of grandmother Indira Gandhi celebrating Rahul
Gandhi's birthday on a plane. [Online] Available at: https://www.timesnownews.com/the-
buzz/article/throwback-picture-of-rahul-gandhis-48th-birthday-celebration-indira-
gandhi/242627 [Accessed 23 August 2019].
Tuchman, G., 1972. Objectivity as Strategic Ritual: An Examination of Newsmen's Notions
of Objectivity. American Journal of Sociology, January, 77(4), pp. 660-679.
Usha, S., 2019. No, This Is Not A Photo Of Abhinandan Varthaman After Casting His Vote.
[Online] Available at: https://www.boomlive.in/no-this-is-not-a-photo-of-abhinandan-
varthaman-after-casting-his-vote/ [Accessed 23 August 2019].
Vlachos, A. & Riedel, S., 2014. Fact-checking: Task definition and dataset construction.
Proceedings of the ACL 2014 Workshop on Language Technologies and Computational
Social Science, pp. 18-22.
WAN-IFRA, The World Association of Newspapers and News Publishers, 2019. World
Digital Media Awards winners announced at WNMC.19 in Glasgow. [Online] Available at:
https://wan-ifra.cmail19.com/t/ViewEmail/d/369DD17148B2AC592540EF23F30FEDED/1D056FAB870E75C3F990754F028F0E8F
[Accessed 23 August 2019].
Wardle, C., 2018. Information Disorder: The Essential Glossary. [Online] Available at:
https://firstdraftnews.org/wp-content/uploads/2018/07/infoDisorder_glossary.pdf?x19860
[Accessed 23 August 2019].
Wardle, C. et al., 2019. An Evaluation of the Impact of a Collaborative Journalism Project on
Brazilian Journalists and Audiences, n.p.: First Draft.
WhatsApp, 2018a. More changes to forwarding. [Online] Available at:
https://blog.whatsapp.com/10000647/More-changes-to-forwarding [Accessed 23 August
2019].
WhatsApp, 2018b. Labeling Forwarded Messages. [Online] Available at:
https://blog.whatsapp.com/10000645/Labeling-Forwarded-Messages [Accessed 23 August
2019].