
A Framework to Establish a

Threat Intelligence Program

Erik Miranda Lopez

Information Security, master's level (120 credits)

2021

Luleå University of Technology

Department of Computer Science, Electrical and Space Engineering


ABSTRACT

Threat Intelligence (TI) is a field that has been gaining momentum as an answer to the

exponential growth in cyber-attacks and crimes experienced in recent years. The aim of TI is to

increase defenders’ understanding of the threat landscape by collecting intelligence on how

attackers operate. Simply explained, defenders use TI to identify their adversaries and

comprehend their attacking methods and techniques. With this knowledge, defenders can

anticipate attackers’ moves and be one step ahead by reinforcing their infrastructure.

Although research papers and surveys have explored the applications of TI and its benefits,

there is still a lack of literature addressing how to establish a Threat Intelligence Program

(TIP). This lack of guidance means that organisations wishing to start a TIP are on their own in

this challenging task. Thus, their TIPs end up generating too much or irrelevant data, which in many cases has led security professionals to ignore the intelligence provided by their TIP.

This research aims to address this gap by developing an artefact that can guide organisations in

their quest of starting their own TIP. This research followed Design Science Research (DSR)

methodology to design and develop a framework that can help organisations define their TI requirements and appropriately operationalise intelligence work to support different

Information Security processes. Additionally, this thesis also contributes to the research field

of Information Security by presenting a list of evaluation parameters that can be used to measure

the success of the establishment of a TIP. Three main parameters were identified: Quality of

Intelligence, which measures the value of the output produced by the TIP; Intelligence Usage,

which evaluates how the intelligence is consumed and applied; and Legal, which concerns compliance with legal requirements.

Keywords: Threat Intelligence, Threat Intelligence Program, Information Security.


ACKNOWLEDGMENTS

I would like to thank and recognise my supervisor, Professor Ali Ismail Awad, who read the

multiple revisions and helped make this thesis possible. I’m extremely grateful for all the

invaluable support and guidance he provided during these months. I’d like to also acknowledge

and thank my fellow opponents Hannes Michel and Emil Christensson, who not only gave new

ideas but also pushed me to do better.

Last but not least, thanks to my wife Chae Yon who endured this long process with me, always

offering encouragement and love. Pursuing a master’s degree with a full-time job wouldn’t have

been possible without you.


Table of Contents

Abstract ...................................................................................................................................i

Acknowledgements ................................................................................................................ ii

List of Figures ........................................................................................................................ v

List of Tables .......................................................................................................................... v

List of Abbreviations .............................................................................................................. vi

1. Introduction .................................................................................................................... 1

1.1. Threat Intelligence ..................................................................................................... 1

1.2. Problem Statement ..................................................................................................... 3

1.3. Research Question ..................................................................................................... 5

1.4. Research Contributions .............................................................................................. 5

1.5. Limitations ................................................................................................................ 6

1.6. Thesis Outline ............................................................................................................ 6

2. Theoretical Foundation .................................................................................................. 8

2.1. Threat Intelligence: A Trending Topic ....................................................................... 8

2.2. Threat Intelligence Lifecycle...................................................................................... 9

2.3. Intelligence Levels ................................................................................................... 10

2.4. Threat Intelligence Program ..................................................................................... 11

3. Literature Review ......................................................................................................... 17

3.1. Literature Review Process........................................................................................ 17

3.2. Establishing a Threat Intelligence Program .............................................................. 18

3.3. Evaluating the Establishment of a Threat Intelligence Program ................................ 21

3.4. Research Gap ........................................................................................................... 27

4. Research Methodology ................................................................................................. 30

4.1. Qualitative Approach ............................................................................................... 30

4.2. Research Structure ................................................................................................... 30

5. Design ............................................................................................................................ 37

5.1. Development Process ............................................................................................... 37

5.2. Final Artefact ........................................................................................................... 42

6. Implementation ............................................................................................................. 45

7. Results and Discussion .................................................................................................. 56

7.1. Ex-Ante Evaluation.................................................................................................. 56

7.2. Ex-Post Evaluation .................................................................................................. 56

7.3. Future Research


8. Conclusion ..................................................................................................................... 58

References ............................................................................................................................ 60


List of Figures

FIGURE 1: EVOLUTION OF THREAT INTELLIGENCE. ...................................................................2

FIGURE 2: LEVERAGE OF TI IN ORGANISATIONS.. .....................................................................3

FIGURE 3: TI REQUIREMENTS DEFINED IN ORGANISATIONS. ......................................................4

FIGURE 4: THREAT INTELLIGENCE AS WEB SEARCH ON GOOGLE. .............................................8

FIGURE 5: TI LIFECYCLE ..........................................................................................................9

FIGURE 6: TI LEVELS.. ........................................................................................................... 11

FIGURE 7: TI SHARING........................................................................................................... 13

FIGURE 8: SEARCH STRATEGY.. ............................................................................................. 16

FIGURE 9: TI FRAMEWORK. ................................................................................................... 20

FIGURE 10: THREAT INTELLIGENCE PROGRAM OUTPUT. ......................................................... 21

FIGURE 11: PYRAMID OF PAIN.. .............................................................................................. 22

FIGURE 12: TI SIMPLIFIED.. ................................................................................................... 23

FIGURE 13: DSR GENRES. . .................................................................................................... 30

FIGURE 14: DSR PROCESS.. ................................................................................................... 31

FIGURE 15: DESIGN PROCESS. ................................................................................................ 32

FIGURE 16: EX-ANTE & EX-POST. ......................................................................................... 33

FIGURE 17: DEVELOPMENT PROCESS...................................................................................... 36

FIGURE 18: FIRST ROUGH ARTEFACT. .................................................................................... 37

FIGURE 19: HIGH-LEVEL DRAFT. ........................................................................................... 39

FIGURE 20: HIGH LEVEL VIEW. .............................................................................................. 41

FIGURE 21: INTELLIGENCE LEVELS. ....................................................................................... 45

FIGURE 22: CORE COMPETENCIES AND SKILLS. .................................................................... 47

FIGURE 23: KPI FOR ASSESSING RETURN ON INVESTMENT...................................................... 49

FIGURE 24: TI DATA SOURCES. .............................................................................................. 50

FIGURE 25: DATA ENRICHMENT OF AN IP. ............................................................................. 52

List of Tables

TABLE 1: TI STANDARDS ....................................................................................................... 14

TABLE 2: TI INITIATIVES ....................................................................................................... 19

TABLE 3: EVALUATION PARAMETERS .................................................................................... 24

TABLE 4: SELECTED MATERIAL FOR REVIEW ......................................................................... 26

TABLE 5: LOW-LEVEL DRAFT ................................................................................................ 39

TABLE 6: LOW LEVEL VIEW .................................................................................................. 42

TABLE 7: STAKEHOLDER EXAMPLES ....................................................................................... 46

TABLE 8: UTILISATION PHASE ............................................................................................... 50

TABLE 9: DIMENSIONS TO MEASURE DATA QUALITY ............................................................. 53

TABLE 10: EX-POST EVALUATION ......................................................................................... 55


List of Abbreviations

API Application Programming Interface

CnC Command and Control

CTA Cyber Threat Alliance

CTI Cyber Threat Intelligence

DSR Design Science Research

FW Firewall

GDPR General Data Protection Regulation

IDS Intrusion Detection System

IOC Indicator of Compromise

IPS Intrusion Prevention System

IR Incident Response

IRT Incident Response Team

IS Information Security

IT Information Technology

KPI Key Performance Indicator

NIST The National Institute of Standards and Technology

OSINT Open-Source Intelligence

QOI Quality of Indicators

SIEM Security Information and Event Management

SOC Security Operations Centre

TA Thematic Analysis

TI Threat Intelligence

TIP Threat Intelligence Program

TTP Tactics, Techniques, and Procedures

VM Vulnerability Management


1. INTRODUCTION

This first chapter introduces the concept of Threat Intelligence (TI) and the benefits it has

brought to organisations. Additionally, the thesis and the scope of the research are defined. It

finishes by establishing the research questions, showing related works, and presenting this thesis’s contributions.

1.1. Threat Intelligence

The traditional approach of Information Security (IS) against cyber-attacks has been generally

reactive: attackers strike first, and defenders react. Defenders have the drawback of

constantly responding to the attackers’ actions [1]. A common example is the URL filtering

done by proxies. A proxy stops users from accessing sites only when these are known to be

harmful. If a malicious actor uses a site that is unknown to the proxy, then the site will not be

banned. Thus, attackers have the first-mover advantage.
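As a simple illustration of this reactive pattern (all names and data below are hypothetical, not taken from the cited works), a minimal Python sketch of blocklist-based URL filtering:

```python
# Minimal sketch of reactive URL filtering, assuming a static blocklist.
# BLOCKLIST and is_allowed are hypothetical names for illustration only.

BLOCKLIST = {"malicious.example.com", "phishing.example.net"}

def is_allowed(host: str) -> bool:
    """Allow any host that is not on the blocklist.

    This is where the first-mover advantage shows: a host unknown to the
    blocklist (e.g. a freshly registered domain) passes until someone adds it.
    """
    return host not in BLOCKLIST

assert is_allowed("new-unknown-site.example.org")  # unknown => not blocked
assert not is_allowed("malicious.example.com")     # known-bad => blocked
```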

As defenders increased their efforts to catch up with the attackers, the attackers relentlessly kept developing their capabilities to stay one step ahead. It soon became apparent that a new approach was needed to address this cat-and-mouse situation. To overcome

this, Threat Intelligence (TI) emerged. Threat Intelligence is not a novel concept as it has been

used in military strategy for a long time.

“If you know the enemy and know yourself, you need not fear the result of a

hundred battles” – Sun Tzu [2]

The well-known metaphor from the Chinese general, philosopher and military strategist Sun

Tzu already warned about the need of knowing both the enemy and yourself in order not to succumb in battle [2]. TI follows the same principle but in the context of cybersecurity: defenders need to know their weaknesses and understand how attackers operate.

The objective of TI is to allow organisations to increase their visibility into the ever-changing

threat landscape with the aim of detecting and preventing threats before they hit them [3]. To

achieve this, TI gathers vital intelligence about the adversaries, their methods and techniques.

The information that TI provides can help reinforce the defences in the IT infrastructure as well as provide essential intelligence to the decision-making process [4]. This proactiveness

is one of the main reasons that explain the increasing reliance of organisations on TI.

Early stages of TI focused on collecting and analysing data from internal tools like system and

network logs and events, antivirus systems, and Security Information and Event Management

(SIEM) [5],[6]. Although collecting data from internal sources is considered reactive, this


approach still allowed defenders to grasp some insights into the threats they were facing. The

next evolution step was the introduction of intelligence gathered from openly available sources

[5]: Open-source Intelligence (OSINT). OSINT comprises data from different sources such as

social media, Internet-Relay-Chat (IRC) and hacker forums to mention a few. This new way of

information gathering presented the possibility of making decisions based on threats that

organisations were not previously aware of [7]. This meant that TI was no longer a reactive

discipline but a proactive one. Defenders could finally know how attackers operated before

their infrastructure was targeted.

Figure 1 depicts this evolution with an iceberg metaphor. In the early stages of TI, analysts had access only to internal logs and events; thus, they could see only the tip of the iceberg, which resulted in a limited view of the threat landscape. With the introduction of new, external sources like the dark web, forums, and feeds, analysts could see beneath the surface, which increased their overall visibility:

Figure 1: Evolution of Threat Intelligence.

As mentioned, TI emerged to fight against the ever-growing cyber-attacks by providing vital

intelligence to the decision-making process [4]. TI can be broadly defined as a supporting tool

for IT Security areas and processes. For example, Vulnerability Management (VM) Teams use

this intelligence to help them prioritise patch installations [8],[9]. If defenders know what

exploit is trending among criminals, patching the corresponding vulnerability can help reduce the

chance of being compromised. On the other hand, Security Operations Centres (SOC) use TI to

tune their Security Information and Event Management (SIEM) with known attack patterns in

order to detect malicious activities [10]. Incident Response Teams also rely on TI to understand

how the attackers operate; this knowledge is ultimately applied to neutralise attacks [11],[12].
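As a hedged sketch of how such intelligence can be operationalised for patch prioritisation (the feed contents and function names are hypothetical; the CVE identifiers are merely illustrative):

```python
# Minimal sketch: sort open vulnerabilities so that those reported by a TI
# feed as actively exploited are patched first. All data is illustrative.

EXPLOITED_IN_THE_WILD = {"CVE-2021-26855", "CVE-2021-21972"}  # hypothetical feed

def prioritise(open_vulns: list) -> list:
    # False sorts before True, so exploited CVEs come first.
    return sorted(open_vulns, key=lambda cve: cve not in EXPLOITED_IN_THE_WILD)

print(prioritise(["CVE-2020-0001", "CVE-2021-26855"]))
# -> ['CVE-2021-26855', 'CVE-2020-0001']
```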


A Threat Intelligence Program (TIP) is a collection of processes and procedures to make Threat

Intelligence actionable within an organisation [13]. This research builds a framework that can guide firms in their quest of starting their own TIP. The framework will help organisations define their TI requirements and appropriately operationalise intelligence

work to support different Information Security processes.

1.2. Problem Statement

TI is a growing topic not only in academia but also amongst companies, as they are quickly

adopting TI to be one step ahead of the attackers. According to a recent SANS survey [10], 72%

of the 585 respondents across a variety of industries said that they use TI as a resource for their defence. This is an increase of over 10% from the previous year’s survey. The SANS survey asked the respondents what the use of TI within their organisation was: the most valuable

function, with 41% of participants’ votes, was the identification of Indicators of Compromise (IoC). Second came intelligence on threat behaviours and adversary tactics, also known as Tactics, Techniques, and Procedures (TTP), with 27%. Next was attack surface identification with 18%, followed by strategic analysis of adversaries with 13%. These results may not be surprising, as TI is mainly used by security operations analysts, who are usually more interested in IoCs in order to protect their infrastructure. Figure 2 summarises the findings of the survey:

Figure 2: Leverage of TI in Organisations (IoC: 40.6%; TTP: 27.3%; attack surface: 18%; strategic analysis: 13%). Adapted from [10].

However, with the current academic literature, firms wishing to start their own TIP need to look

elsewhere for guidance. Research papers and surveys have explored the applications of Threat

Intelligence and demonstrated its benefits. Also, they have provided solutions to other

challenges that TI faces; yet, how to initiate a practical Threat Intelligence Program has been

overlooked [11],[14],[15].

This lack of guidance to start a TIP is the core problem of this thesis. Firms wishing to start

their TIP are on their own as there is no direction for this task. This has led companies to use

their program in ineffective ways. One of the main consequences of not having guidance when

starting a TIP is that organisations do not document their intelligence requirements, as

demonstrated by the same SANS survey [10]. These requirements can help the organisations to


be more focused on their IT security objectives as well as to appropriately operationalise

intelligence work. However, due to this lack of direction, many fail to do so: only 30% of organisations have documented intelligence requirements. As depicted on figure 3, the majority did not have documented requirements; most handled them ad hoc, while others planned to define

them in the future. Only a small minority of respondents admitted not having any plans to define

intelligence requirements.

Poor or absent definition of requirements is not the only negative consequence. In a study

conducted by Ponemon Institute [16], over 70% of the respondents admitted that they are

generating too much data to use properly. The study also discovered that 1 in 2 security professionals do not look at the intelligence received, and of those who do look at it, 43%

don’t use the information for decision making. This means that less than 25% of security

professionals use the intelligence provided by their TIP.

This section has shown that the interest in TI is growing amongst companies. Firms recognise

TI’s value and are quickly adopting it to improve their defence; yet many organisations fail to

use it properly. This has led IT security professionals to ignore the intelligence they receive

from their TIP because they are flooded with irrelevant data. To overcome this, organisations

need guidance to start their own TIP. However, research papers have overlooked how to initiate

a practical Threat Intelligence Program [11],[14],[15].

One of the reasons why this topic may have been overlooked is that, despite all the conferences,

workshops, and training available, there is still a gap between practice and theory in

Information Security [17],[18]. Although collaboration between academia and industry brings

opportunities that would not be possible if apart, research studies still differ regarding industry’s

goals and motivations [18],[19],[20]. Another reason that may explain the gap is that Threat

Intelligence is still a novel concept. Abu et al. [21] argue that Threat Intelligence is in its early stages and requires further research and development to make the most of its potential.

Figure 3: TI requirements defined in Organisations (yes: 30%; no, ad hoc: 37%; no, plan to define: 26%; no, no plan: 7%). Adapted from [10].

Currently, there is a lack of guidance on starting and establishing a Threat Intelligence Program in organisations. This is the core problem of this research. Therefore, this thesis will try to fill this gap by developing a framework that can aid most organisations in initiating a TIP.

1.3. Research Question

As described in the previous section, there is a need for guidelines to describe how to start a

Threat Intelligence Program. Hence, the research questions to be addressed in this study are:

• How can a Threat Intelligence Program be established to support Information Security

processes?

• How can the success of the establishment process of the Threat Intelligence Program be

evaluated?

The first question is answered with a framework, that is, a structure that presents and explains information for the realisation of a defined goal [22]. In this thesis, this means providing a foundation for how a Threat Intelligence Program can be established to accommodate Information Security processes

in most organisations. Thus, the objective of this project is to build a framework to guide

organisations in starting a practical TIP.

For the second question, evaluation parameters that can be used to measure the success of the

TIP are collected. The literature review explores what makes a TIP successful and presents the

findings.

1.4. Research Contributions

The focus of this thesis was to develop a framework to aid in the initiation and establishment

of a practical Threat Intelligence Program that could be used in most organisations to support

their Information Security processes. The contributions of this research have been as follows:

➢ Provided a literature review of the current works in Threat Intelligence.

➢ Identified the lack of guidance to start and establish a Threat Intelligence Program in the

literature.

➢ Developed an artefact to provide a foundation for the establishment of a Threat

Intelligence Program within most organisations.

➢ The artefact seeks continuous improvement of the TIP by adding self-evaluation mechanisms. Additionally, the artefact is generic, so it can be used by most

organisations; it is comprehensive by providing both key actions and further details; and

it has IT awareness capabilities to integrate the TIP with existing technologies and

processes.

➢ The artefact makes a contribution to the business environment by addressing a relevant

problem faced by organisations.


➢ Presented a list of evaluation parameters that can be used to measure the success of the

establishment of a Threat Intelligence Program.

1.5. Limitations

Developing a framework that fits all types of organisations is a challenging task, even more so

within the time frame of a master’s thesis. To reduce the complexity of the task and be able to

meet the deadline of the research, the developed framework will be reviewed by one firm. Due

to privacy concerns, the firm’s name won’t be disclosed, and the firm will be referred to by the placeholder “Capsule Corp” in this document.

The test organisation is in the engineering business and it’s one of the leading firms in its sector.

Most of the IT infrastructure is managed internally and provides support to the business, which

is comprised of (approx.) 60K users and a few thousand servers. Management wants to have a

Threat Intelligence Program that can help the overall security of the organisation by providing

intelligence to the information security officers, Security Operation Centre (SOC), and other IT

teams. Although the framework will not be tested but only reviewed by the organisation, the

aim is to make it generic enough to accommodate Information Security processes of most

organisations.

1.6. Thesis Outline

The rest of this research work is structured as follows:

o Chapter 1: Introduction – introduces Threat Intelligence and its benefits. The chapter

also defines the scope of the research, the motivation of the study along with the problem

description and research questions. It ends by presenting the thesis contributions and its

limitations.

o Chapter 2: Theoretical Foundation - presents the concept of Threat Intelligence in depth

to establish a foundation for the thesis. The chapter describes what Threat Intelligence

is, its lifecycle, existing intelligence levels, and the challenges that Threat Intelligence

Programs face.

o Chapter 3: Literature Review – explains the literature review process and search

procedure. Additionally, the chapter explores related work concerning the problem that

this thesis addresses. The second research question is answered here too.

o Chapter 4: Methodology - describes the research methodology applied in this thesis. It

covers how the research is to be done and justifies the chosen methodology. In addition,

an introduction to Design Science Research (DSR), the selected methodology, is also provided.

o Chapter 5: Design – contains the details of the design process of the artefact. The chapter

also describes the iteration phases required to build the artefact of this thesis. Each

iteration was followed by an evaluation to redesign and improve the artefact. The

chapter also shows the final artefact, a framework that can be used to initialise and

establish a TIP in any organisation.


o Chapter 6: Implementation – describes how the final artefact is implemented. Details

are provided to describe what needs to be done to apply the artefact and establish a

Threat Intelligence Program.

o Chapter 7: Results and Discussion – evaluates the research itself and the artefact. This

chapter validates that the research followed a disciplined approach as well as evaluates

the artefact built in this thesis. Additionally, the chapter includes future work on the

topic.

o Chapter 8: Conclusion – closes the thesis by summarising what has been done, the

results of the thesis and further discussion.


2. THEORETICAL FOUNDATION

This second chapter expands what is known about Threat Intelligence (TI) in the current

literature to lay the foundation for the thesis. It explains what TI is, its lifecycle, and intelligence

levels. Furthermore, Threat Intelligence Program’s challenges are discussed.

2.1. Threat Intelligence: A Trending Topic

Threat Intelligence (TI), also known as Cyber Threat Intelligence (CTI), is a field that allows

organisations to increase their visibility into the ever-changing threat landscape with the aim of

detecting and preventing threats before they hit them [3]. To achieve this, TI gathers vital

intelligence about the adversaries, their methods and techniques. The information that TI

provides can aid reinforcing the defences in the IT infrastructure as well as providing essential

intelligence to the decision-making process [4].

TI has been gaining momentum as an answer to the exponential growth in cyber-attacks and

crimes experienced in recent years. In the US alone, cybercrime cost an estimated $4.2 billion in 2020 according to the FBI’s annual Internet Crime Report [23]. Meanwhile, the TI market is estimated to reach a value of $12.8 billion by 2025 [24]. Thus, the interest in the subject has been rising

steadily over the years. Figure 4 shows the popularity of TI according to Google: a score of 50 means the term is half as popular as at its peak, whereas a value of 100 means the term reached its peak popularity [25]. Searches for TI on Google have been increasing steadily, showing growing attention:

Figure 4: Threat Intelligence as Web Search on Google. Adapted from [25].

Additionally, the number of publications with Threat Intelligence as a topic has been increasing

year by year [26], and its applications in Information Security processes have been widely

studied. For instance, TI can be used to proactively search for malicious activities (Threat

Hunting) within an Information Technology (IT) infrastructure [27]. Similarly, Security

Operations Centres (SOC) have also adopted TI to detect common and trending exploits [28].

The same intelligence is used in Vulnerability Management (VM) to know how the adversaries

operate and detect which vulnerabilities they are exploiting [8]. With this information, the VM team can prioritise the patching process and reduce the attack surface the adversaries are using.


Although the research is still limited, TI is also being applied to critical sectors and services.

For example, Moustafa et al. [29] presented a Threat Intelligence scheme for industry 4.0

systems. Their proposal is based on Mixture-Hidden Markov Models (a modelling technique for randomly changing systems), which discovers anomalous activities against smart systems such

as automated factories and development on demand. On the other hand, Zhang et al. [30]

focused their efforts on building an attack prediction method based on Threat Intelligence for the Internet of Things (IoT). Thanks to a Support Vector Machine (SVM) learning algorithm,

their method is capable of obtaining malicious behaviour and extracting relevant intelligence.

Still, most of the research has been dedicated to new trends and threat intelligence sharing issues. Only a few studies have focused on exploring TI problems and providing possible

solutions [9].

2.2. Threat Intelligence Lifecycle

Threat Intelligence can help reinforce the defences by providing intelligence. However, what exactly is intelligence? Intelligence may be described as the result of transforming the unknown into the understood [31]. In other words, information becomes intelligence when it is analysed and contextualised [21]. Thus, a process is needed to go from data to intelligence.

The intelligence production process is a never-ending activity and should be considered a

circular process. To achieve this, Abu et al. [21] argued that Threat Intelligence should follow

a set of activities to transform the unknown into intelligence. In their survey, different TI frameworks were analysed, and the steps common to all of them are:

• Planning and Direction involves defining the objectives and business requirements that

will be used to know what information is needed to support decisions.

• Data Collection is focused on acquiring and provisioning the data. The data can be

collected from both internal (logs, events) and external resources (security bulletins,

vendors, TI providers).


• Data Analysis includes the processing and analysis of the gathered data to produce

intelligence. The step translates the raw data into intelligence with significance and

implications. Arguably, this is the most important step, and it is the one that differentiates

information collection from intelligence [21].

• Sharing and Evaluation, where the quality of the intelligence is evaluated to see if it

meets the objectives and requirements. Finally, the intelligence is disseminated to the

relevant stakeholders for consumption.

Figure 5: TI Lifecycle. Adapted from [32].

As depicted on figure 5, there is an extra step called “Utilisation”. This is where the information that has been collected during the lifecycle is put into action. This final step was introduced by Bautista, an InfoSec practitioner, in his book “Practical Cyber Intelligence” [32], which argues that once the intelligence has been produced, its value is realised when it supports the stakeholders and their operations.
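Expressed as a minimal, hedged Python sketch, one pass of the lifecycle could look as follows; every function below is a hypothetical stub, since the real activities are organisational processes rather than code:

```python
# Hypothetical stubs for the five lifecycle activities described above.
def plan_and_direct(reqs):      return {"need": reqs}                           # Planning and Direction
def collect(plan):              return ["raw log line"]                         # Data Collection
def analyse(raw):               return [f"contextualised: {r}" for r in raw]    # Data Analysis
def share_and_evaluate(intel):  return {"met_requirements": True}               # Sharing and Evaluation
def utilise(intel):             print("acting on", intel)                       # Utilisation

def run_one_cycle(requirements):
    plan = plan_and_direct(requirements)
    raw = collect(plan)
    intel = analyse(raw)
    feedback = share_and_evaluate(intel)
    utilise(intel)
    # The process is circular: evaluation feeds back into the next planning round.
    return requirements if feedback["met_requirements"] else "revised " + requirements

requirements = "detect phishing against our domain"
requirements = run_one_cycle(requirements)  # in practice this repeats indefinitely
```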

2.3. Intelligence Levels

Earlier it was explained that TI is used by defenders to try to be a step ahead of the adversaries;

however, as already mentioned, the main function of TI is to provide information to make better

security decisions [4]. Depending on the objectives and its audience, TI can be divided into four

subcategories or levels: Strategic, Tactical, Technical, and Operational [9],[15],[31],[33],[34].

Strategic Intelligence refers to high-level intelligence about the threat landscape, usually used by a C-suite and board-level audience [15]. It includes non-technical data, such as risk information and business goals, to be used for long-term strategies, for example, to prioritise budget based on this knowledge. This information is usually gathered from national or local

media, research reports and industry specific publications [11].

The objective of Operational Intelligence is to detect specific threats against the organisation [33]. This information may be collected from closed forums or the dark web, and the aim is to detect hints or tips that may indicate a future attack against the organisation. As Tounsi and Rais discuss [9], this information is rare to find due to the challenges that an investigation on closed forums poses. Although hard to gather, operational intelligence can be invaluable from a Vulnerability Management perspective: it gives real-time exploitation information, which can help with the prioritisation of vulnerability patching.

When the objective of TI is to collect information about threat actors’ tactics, techniques, and

procedures (TTP), we’re talking about Tactical Intelligence [9],[33]. Here, the goal is to understand how the attackers act and which tools they use. This threat analysis provides an insight into

how the adversaries conduct their attacks, which can help operational staff to respond

accordingly [31].

Technical Intelligence is information about specific attacks. This intelligence is normally collected from the threat actors’ behaviours, internal logs, and indicators of


compromise (IoC) [9]. This information is of high value to operations personnel like SOC

analysts, threat hunters and Incident Response [11], by accelerating their triage and increasing

their visibility. Thus, this intelligence is used for detection and remediation [31].

Figure 6: TI Levels. Adapted from [23].

As seen in figure 6, the different intelligence levels can be grouped by their temporal usage term or by their detail level. For example, information that aids long-term decisions can be classified as Strategic and Tactical Intelligence, whereas Operational and Technical Intelligence focus on information to be used in the short term. By detail level, Tactical and Technical offer low-level information, meaning that they focus on technical specifications like TTPs and other indicators. Conversely, Strategic and Operational Intelligence make use of high-level information to provide executive summaries.
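This grouping can be captured in a small data structure; a hedged Python sketch of the four levels as characterised above (the audience labels are approximations drawn from this section):

```python
# The four TI levels, grouped by temporal usage term and detail level
# as described above. Labels are approximations for illustration.
LEVELS = {
    "Strategic":   {"audience": "C-suite / board",    "term": "long",  "detail": "high"},
    "Tactical":    {"audience": "operational staff",  "term": "long",  "detail": "low"},
    "Operational": {"audience": "defence teams",      "term": "short", "detail": "high"},
    "Technical":   {"audience": "SOC / IR / hunters", "term": "short", "detail": "low"},
}

long_term = [name for name, lvl in LEVELS.items() if lvl["term"] == "long"]
print(long_term)  # ['Strategic', 'Tactical']
```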

2.4. Threat Intelligence Program

TI emerged to aid the decision-making process by providing intelligence [4] whereas a Threat

Intelligence Program (TIP) is simply a collection of processes and procedures to incorporate

Threat Intelligence into existing Information Security areas within an organisation [13]. This

section explores the literature to outline five of the main challenges that TI, and consequently TIPs, face, and some of the proposed solutions.

a) Low Levels of Information

TI deals with many data types and not all provide the same level of information. The most basic

forms of data are observables, the lowest level of information [35]. They provide stateful

properties or measurable events to identify a threat; however, this data type does not provide


any intelligence and cannot be used alone to reach any conclusion [35]. Indicators (also

known as Indicators of Compromise) are observable data with context. Common examples

include hashes, IPs, or domains that can indicate a breach or threat. Tactics, Techniques and

Procedures (TTP) refer to attack methods from a strategic point of view and provide a higher

level of information. Threat Actors are the attackers or malicious actors behind the attacks.

These could either be an individual or a group. Knowing who the attackers are can help security

practitioners detect specific Indicators of Compromise (IoC) and recognise attack methods

used by the threat actors to later respond to the threats accurately and in a timely manner [36].
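A minimal sketch of these levels of information as data structures (the class and field names are hypothetical) may help clarify the distinction: an observable only becomes an indicator once context is attached:

```python
from dataclasses import dataclass

@dataclass
class Observable:
    kind: str    # e.g. "ipv4", "md5", "domain"
    value: str   # a measurable property; carries no intelligence on its own

@dataclass
class Indicator:
    observable: Observable
    context: str  # what the observable means, turning raw data into an IoC

obs = Observable("ipv4", "198.51.100.23")
ioc = Indicator(obs, "CnC callback address observed in a phishing campaign")
print(ioc)
```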

One of the main issues of TI is that many of the current tools concentrate only on observables

and indicators, the lowest levels of information. SpamHaus, VirusTotal, and Abuse.ch amongst

others are common examples of tools that focus on low level information. Although the

information they provide is essential for any IT practitioner, these tools are mostly focused on

“simple” or low-level types of indicators and don’t provide much visibility.

For higher-level information such as TTP and threat actor information, TI practitioners need to

look elsewhere. For instance, social media, hacker communities, and the dark web are rich

sources of IoCs; yet, collecting them is not a trivial task. In the last few years, academia has

been working to address this.

To extract and validate IoCs from web applications, Catakoglu et al. [37] developed a novel

system that automatically collects harmful items (web indicators) by using a high-interaction

honeypot. Their tests show that their approach is able to detect malicious sites that traditional

methodologies failed to spot. Sabottke et al. [38] created a tool that can extract potential

vulnerabilities from Twitter to know which ones are more likely to be exploited. The novelty

of their approach is that they use machine learning algorithms to discard the false positives

introduced by the adversaries. In other words, they solve the issue of adversaries poisoning the

dataset. Ebrahimi et al. [39] likewise worked on social media and successfully applied deep

convolutional neural network (CNN) to detect predatory discussions. According to their

experiments, their proposed architecture outperforms existing classification techniques. Deliu

et al. [40], on the other hand, focused their efforts to extract IoCs from hacker forums using

machine learning techniques. Samtani et al. [5] also made use of machine learning techniques

to develop AZSecure Hacker Assets Portal, a novel idea that not only collects malicious assets

from online hacker communities but also analyses them.
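The works above rely on machine learning; as a far simpler, hedged illustration of the underlying extraction task, a regex-based sketch, which is exactly the kind of naive approach whose false positives those techniques aim to overcome:

```python
import re

# Naive patterns for two common observable types; illustration only.
IP_RE  = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
MD5_RE = re.compile(r"\b[a-fA-F0-9]{32}\b")

post = "New dropper at 203.0.113.7, payload md5 9e107d9d372bb6826bd81d3542a419d6"
print(IP_RE.findall(post))   # ['203.0.113.7']
print(MD5_RE.findall(post))  # ['9e107d9d372bb6826bd81d3542a419d6']
```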

b) Timely Delivery

The intelligence should be delivered in time and with minimal latency [41]. This not only allows

intelligence recipients to prepare suitable responses, but also avoids wasting defenders’ time.

For example, if TI practitioners discover a new attack method, new security rules to stop the

attack need to be developed and distributed as quickly as possible. Any delay increases the

chances of being attacked.


The literature has explored different ways of delivering the information in time. For example,

a novel approach to automatically generate security rules from the intelligence without human

intervention was presented by Kim et al. [42]. Their framework periodically collects data from

external sources and automatically generates security rules to fight against the attacks [42].

Although the model does not verify if the indicator is still active, it does reduce the time needed

to apply a response.
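As a hedged illustration of the idea (not of the actual framework of Kim et al.), the following sketch turns a freshly received IoC into a Suricata-style IDS rule; the rule wording and sid numbering are made up:

```python
def ioc_to_rule(bad_ip: str, sid: int, description: str) -> str:
    # Emit a Suricata-style rule alerting on any traffic from the bad IP.
    return (
        f'alert ip {bad_ip} any -> $HOME_NET any '
        f'(msg:"TI: {description}"; sid:{sid}; rev:1;)'
    )

print(ioc_to_rule("198.51.100.23", 1000001, "known CnC address"))
```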

c) Generic Information

Another challenge is that most existing approaches indiscriminately share the intelligence [14].

Simply put, the information is generic and likely to be irrelevant to the organisation. For

instance, trending exploits on Windows systems may not be useful for a company with Linux-

based infrastructure. Conversely, intelligence of threat actors specialised on attacking financial

institutions is unlikely to be relevant for a logistics organisation.

As a solution to address this, Zhao and Yan’s team [3] proposed a novel TI extraction framework

called “TIMiner”, which extracts IoCs automatically and produces categorised CTIs with a

domain tagging method. They demonstrate that, thanks to the domain tags, they can share the

industry-specific intelligence with the relevant stakeholders.
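A hedged sketch of the underlying idea of domain tagging (not TIMiner’s implementation): only feed items whose tags overlap the organisation’s profile are passed on, echoing the Windows/Linux and finance/logistics examples above:

```python
ORG_PROFILE = {"linux", "logistics"}  # hypothetical organisation profile

feed = [
    {"ioc": "evil.example.com", "tags": {"windows", "finance"}},
    {"ioc": "203.0.113.7",      "tags": {"linux", "logistics"}},
]

# Keep only intelligence whose domain tags intersect the profile.
relevant = [item for item in feed if item["tags"] & ORG_PROFILE]
print(relevant)  # only the second item survives the filter
```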

d) Sharing

The benefits of sharing the intelligence with other organisations should not be overlooked. The

common (and overly simplified) process of information sharing is as depicted on figure 7. An

organisation detects an attack on its infrastructure and gathers information about it. Then, the

target organisation shares the intelligence of this attack. Other organisations can consume the

intelligence and apply security rules to stop the attack thanks to the information provided by

the target organisation.

Figure 7: TI Sharing.


When an organisation shares the attack intelligence from a successful attack, as shown on figure

7, it gives other firms the opportunity to learn from it. This allows other companies to be ready

and take countermeasures to stop the attacks. Without TI, every firm would be on its own and

would likely need to respond to the attack.

However, automatically generating and sharing the enormous amount of intelligence gathered

is a big issue in Threat Intelligence [3]. Manual inspection is not feasible as it can become a

massively time-consuming task given the vast volume of data that needs to be analysed. To

overcome this, different frameworks and standards have been developed, like STIX [43], TAXII [44] and OpenIOC [12]. These standards allow TI practitioners faster sharing and faster

detection.

The STIX Project defines itself as [43] “a structured language for describing cyber threat

information so it can be shared, stored, and analyzed in a consistent manner”. In other words,

STIX (Structured Threat Information eXpression) is a language that defines the scope of the

information that should be included as well as how the threat information structure should be

represented. The main objective of STIX is to define how to represent the information and not

to specify how it should be shared [6],[45].
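For illustration, a minimal STIX 2.1-style indicator built as a plain Python dictionary; the field set is simplified and the identifier is made up (a real deployment would typically use a dedicated STIX library rather than hand-built dictionaries):

```python
import json

# Simplified STIX 2.1-style indicator; illustration only.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",  # made-up UUID
    "created": "2021-04-06T20:03:00.000Z",
    "modified": "2021-04-06T20:03:00.000Z",
    "name": "Known CnC IP address",
    "pattern": "[ipv4-addr:value = '198.51.100.23']",
    "pattern_type": "stix",
    "valid_from": "2021-04-06T20:03:00.000Z",
}
print(json.dumps(indicator, indent=2))
```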

TAXII (Trusted Automated eXchange of Indicator Information) was created to specify how the

information should be shared. This application layer protocol was also developed by MITRE

and its main objective is to define how the TI information represented in STIX format can be

exchanged over HTTPS [6],[44]. To achieve this, the standard defines a set of services and

message exchanges (RESTful API) as well as a set of requirements for the TAXII servers and

clients [44].
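A hedged sketch of polling a TAXII 2.1 collection over HTTPS, assuming the third-party requests library; the server URL, collection id, and credentials are placeholders, and the endpoint layout follows the TAXII 2.1 specification as the text above describes it:

```python
import requests

API_ROOT = "https://taxii.example.com/api1"           # placeholder
COLLECTION = "3f0c9a42-0000-4a1b-9f00-000000000000"   # placeholder collection id

resp = requests.get(
    f"{API_ROOT}/collections/{COLLECTION}/objects/",
    headers={"Accept": "application/taxii+json;version=2.1"},  # TAXII media type
    auth=("user", "pass"),                                     # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
for obj in resp.json().get("objects", []):  # STIX objects arrive in an envelope
    print(obj["type"], obj["id"])
```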

OpenIOC is a simple framework originally developed by Mandiant, currently owned by

FireEye, to assist in Incident Response investigations by sharing Indicators of Compromise

(IoC) in a machine-digestible format [12]. OpenIOC later evolved to define a standardised

format to record IoC in an extensible XML schema. The main objective is to easily share IoCs

across different organisations via automated solutions [6].

Table 1: TI Standards.

Standard   Stands for                                            Licence                          Main function
STIX       Structured Threat Information eXpression              Open community-driven (MITRE)    A structured language to represent the information
TAXII      Trusted Automated eXchange of Indicator Information   Open community-driven (MITRE)    A transport mechanism for exchanging information
OpenIOC    Open Indicator of Compromise                          Open community-driven (FireEye)  A framework to share information

The capacity to generate, refine and control data and then share it in an automatic manner is

considered one of the most important requirements for any successful Threat Intelligence

system [46]. Therefore, it may not be surprising that many scholars have worked on improving

intelligence sharing. Some researchers have studied the difficulties, such as trust issues, and

motivations for information sharing [47]. And as presented, different standards and frameworks

have also been proposed to structure information and facilitate its sharing. In addition to those,

other works have developed their own sharing models. For instance, the “Malware Information

Sharing Platform” (MISP) was designed by Wagner et al. [48] to allow collection and sharing

of IoC of targeted attacks and threat information. The aim of the platform is the same as any

Threat Intelligence Platform: to take preventive and mitigation actions against attacks.

Additionally, the proposal allows the sharing of intelligence between different actors. One of

the key differences with other sharing proposals is that Wagner’s team’s platform allows users

to make suggestions for changes to the intelligence, even if it was created by another

organisation [48]. This ability increases the effectiveness of the information as demonstrated

in their research.

e) Privacy and Legal Constraints

In this section different standards to share Threat Intelligence information have been explored,

but something that may go unnoticed is existing laws and regulations that may not allow the

sharing of this intelligence [49]. What is legal to share in one country may be illegal in another

[50]. For instance, information about IP addresses is not considered personal information in the

UK Data Protection Act; in Germany, on the contrary, IP addresses may be considered personal

information [14]. Thus, organisations planning to share or already sharing TI information need to

consider legal constraints and must ensure compliance with regulatory and legal requirements

like the General Data Protection Regulation (GDPR) in the European Union [51] and state and

federal laws in the US [52].
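As a hedged illustration of one common technical precaution before sharing, the sketch below masks the host part of IP addresses; whether such masking is sufficient under the GDPR or any other regime is a legal question outside the scope of this sketch:

```python
import ipaddress

def mask_ipv4(ip: str, keep_prefix: int = 24) -> str:
    """Zero the host bits, keeping only the /keep_prefix network part."""
    net = ipaddress.ip_network(f"{ip}/{keep_prefix}", strict=False)
    return str(net.network_address)

print(mask_ipv4("198.51.100.23"))  # 198.51.100.0 -- no single host identifiable
```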

Many papers have explored and provided solutions for protecting personal data. Breaux and

Antón presented a novel classification method to grant or deny access rights to sensitive

information in order to comply with the US Health Insurance Portability and Accountability

Act (HIPAA) [53]. A specialised tool for privacy control to share sensitive datasets was

developed by Doorn and Thomas [54]. The tool assesses which level of protection is needed

based on the GDPR. Albakri et al. [55] also worked on the GDPR and the team presented a

model to aid in the decision making when sharing intelligence. Their model not only evaluated

GDPR legal requirements but also included advice on the protection level required.


3. LITERATURE REVIEW

This third chapter explains the literature review process and search procedure used in this thesis. Additionally, the second research question of the thesis is answered with the findings from the literature review. It continues by expanding on the literature concerning the core problem to support the rationale of this study.

3.1. Literature Review Process

The framework developed by Brocke et al. [56], represented on figure 8, was used as a guideline

for this literature review. One of the main advantages of this framework is that it describes

literature review as a circular process, which gives further flexibility when the review needs an

extension or updating. An additional point in favour of Brocke’s team’s framework is the use

of Cooper’s taxonomy [57], as it can help define the scope of the research in a clear way;

however, this does not mean that the works of Levy and Ellis [58], and Webster and Watson

[59] were overlooked. All of them provide good recommendations and tips.

Figure 8: Search Strategy (definition of review scope, conceptualisation of topic, literature search, literature analysis, research agenda). Adapted from [56].

First, the scope of the review was defined to limit the search topic. Theories and applications

related to Threat Intelligence were chosen as the focus of the literature review. The main goal

of the review was to investigate what has been done so far in the TI area to identify research

gaps, as well as to provide a foundation on the topic.

For the conceptualisation of the topic, the search of the literature was initially carried out on

search engines such as DuckDuckGo. As TI is a relatively new topic in Information Security

(IS) and due to its rapid changing characteristics, books, whitepapers, and other non-scientific


papers (mostly from governmental organisations or Threat Intelligence vendors) were reviewed

to get a general idea of current industry trends and issues. SANS surveys and ENISA proved to

be of great help in this first step.

For the third step, Google Scholar with keyword-based searches and time filtering (generally

not older than 5 years) was used for the literature search process. The aim was to understand

where current investigations in TI are focusing. Then, scientific databases like IEEE, ACM,

and Springer to mention a few, were used to look for openly available (or accessible through

the University’s library) but peer-reviewed publications, which provided a more specialised view in

the TI field.

The analysis of the literature required verifying the reliability of the gathered material. For this, the material had to be published and originate from reliable sources. Additionally, the material

ideally had to come from peer-reviewed sources as there’s a strong link between peer-review

and quality [60],[61]. Although peer-review is arguably the most trustworthy characteristic of

scholarly material [60], disregarding papers that have not been reviewed may hinder the

progress of the research. While non-peer-reviewed material is not suitable to support projects, such as medical and scientific studies, which require validity and a rigorous methodology, this type of paper may be suitable for fast-changing fields. Thus, due to the rapid changes of

Information Security and to provide an insight on what’s being done outside academic circles,

some whitepapers and non-scientific papers were used too.

The fifth and last step of the literature review was to scrutinise the gathered material to ensure

that the content was of relevance for this thesis. The selected literature was then used to generate

the research base, foundation of the topic, and related work.

The literature review process was carried out three times. The first “loop” aimed to gain

foundational knowledge of the topic and identify gaps in the research. The second time limited

the scope to the identified gap: lack of literature to start a practical Threat Intelligence Program.

Here, what has been done so far to fill this gap was reviewed. Lastly, the literature review

focused on the evaluation of a TIP. The objective was to understand what makes a TIP

successful.

The literature review has been divided into two parts in order to answer the two research

questions of this study. The first section reviews work that tried addressing how to start a Threat

Intelligence Program. The second section explores the literature that focuses on the evaluation

of TIP.

3.2. Establishing a Threat Intelligence Program

As described in the introduction, a Threat Intelligence Program is a collection of processes and procedures to make Threat Intelligence actionable [13]. However, there are few studies that

address how to start and establish a TIP and improve its practice. Thus, this section explores


works in academia and other initiatives that have focused on providing direction on how to

improve the practice of Threat Intelligence and Threat Intelligence Platforms.

Multiple initiatives from both governmental and private firms have been created to improve the

practice of Threat Intelligence. Here we’ll review four of the main ones:

ENISA (European Network and Information Security Agency) is the European Union’s

agency for cybersecurity and its main objective is to achieve a mutual level of cybersecurity

between Member States and encourage information sharing amongst them [62]. This agency

has been providing threat landscape reports over the years which provide comprehensive

analysis of current and emerging threat trends [63]. Additionally, ENISA has multiple resources

about TI to introduce the topic, explore standards and tools and study software vendors [64].

ENISA also presents some recommendations to help organisations mature their TI capabilities,

but there is not much detail on how to accomplish them, and it does not provide guidelines to start

a TIP.

Another example is the NIST (National Institute of Standards and Technology), which is a non-

regulatory agency from the U.S. Department of Commerce, whose mission is to promote

innovation by developing standards, procedures, and methodologies [65]. This federal agency

has published a guideline for establishing and participating in threat information sharing

relationships [41]. The publication starts with basics of Threat Information and explains the

benefits and challenges of sharing this intelligence. It continues by providing guidance on how

to establish and participate in sharing relationships with the community. Although this guideline

may be used as starting point in a Threat Intelligence Program, its main objective is guiding in

the process of exchanging information, in other words, on how to cooperate with other agencies

and/or organisations [28].

Cyber Threat Alliance (CTA) is an independent, non-profit organisation composed of several

private-sector companies with the objective of sharing their intelligence. CTA’s automated

platform delivers around 15 million observables per quarter with the aim of improving its members’ security posture [66]. Different private organisations work

together to address specific challenges and collaborate to detect and fight against emerging

threats [67]. Although the intelligence is restricted to members only, CTA also provides regular

webinars and comprehensive information about TI to the community. Nevertheless, they do not

provide information on how to get a TIP up and running.

SANS Institute is an information security training and security certification education

organisation. In addition to training, SANS also provides early warning systems, news, and


vulnerability digests as well as over 1,200 security whitepapers [68]. Furthermore, hundreds of

Threat Intelligence whitepapers and podcasts are available for the information security

community. Topics range from building taxonomies for technology threats to mining the dark

web for intelligence, yet the papers are not peer-reviewed and hold little academic value [69].

Table 2: TI Initiatives.

Initiative | Sector | TI Topics
ENISA | Governmental | Encourage information sharing; threat landscape reports; review of tools, standards, and vendors
NIST | Governmental | Guidance to share intelligence
CTA | Non-Profit | Intelligence sharing (members only); addressing TI challenges
SANS | Private | Training; whitepapers; TI news

Many resources on Threat Intelligence can be found in government bodies and specialised organisations – table 2 summarises the ones reviewed – yet they do not cover how to initiate a practical Threat Intelligence Program in an organisation. Work done in academia is reviewed next.

Brown & Serrano [70] defined requirements and expectations that a TIP should meet. Their work focused on solving key challenges like working with multiple sources, enriching the data, and determining the relevance of the intelligence, which can help maximise the value of the intelligence collected and translate the findings into actions and decisions [70]. Although the researchers argue that implementing the identified requirements can maximise the value of the intelligence and translate it into action [70], describing how to initiate a program was out of their scope.

Shin and Lowry [71] proposed a TI capability model that prescribes the key competences necessary for engaging in TI activities. The authors introduced three dimensions in their model: analytical component capability, contextual response capability, and experiential practice capability. Analytical component capability refers to the ability to manage the analytical parts of TI operations to facilitate discovery and analysis. Contextual response capability represents the aptitude for managing the infrastructure to respond to evolving threats. Lastly, experiential practice capability means the ability to provide solutions to the threats. Their model does provide guidance on how to make use of TI, yet it does not guide organisations in their quest to start their own TIP.

Many models, frameworks, and platforms targeting different TI sub-domains have been proposed over the years; still, the "Threat Intelligence Framework" developed by Gschwandtner et al. [8] is the only known research paper that tries to address how to start a TIP. Although their original objective was to analyse the benefits of integrating TI into the Vulnerability Management process within an organisation, they developed a TI framework to


answer their research question. This framework is composed of four process models, as represented in figure 9:

Figure 9: TI Framework. Extract taken from [8].

The first activity, "definition and strategic alignment", includes the step of asset identification within the organisation to produce an asset list. The list should contain the assets that need to be protected because they hold business value. In this first activity, existing defence systems also need to be identified, and a risk assessment performed to evaluate threats and adversaries. The second activity, "integration of TI", is where the technical requirements for the integration are evaluated. This means that TI suppliers need to be compared by evaluating their data quality and integration with current Information Security assets such as antivirus, firewalls, and SIEM. The next activity is the definition of the "utilisation of TI". This includes the creation of new processes or the enhancement of existing ones. An example used by the authors is the integration of TI into vulnerability reports to enrich the information. The last activity, "update and share TI", is the activity of setting rules and guidelines for TI data evaluation as well as defining the TI data sharing process. This can include the sharing of malicious indicators, vulnerabilities that should be prioritised, and other reports.

The framework demonstrated positive impacts on the organisation's Information Security processes, specifically in the Vulnerability Management area; yet Gschwandtner's team [8] argue that further research is needed to evaluate the potential benefits of TI with other Information Security management systems. They also point out that how TI can be integrated into other Information Security processes, like red team exercises, requires more work too. Additionally, as future work, they suggest implementing their framework in diverse organisations to evaluate its suitability, which can help propose adaptations and improvements.

3.3. Evaluating the Establishment of a Threat Intelligence Program

The literature review was conducted not only to find works that tried to address how to start a TIP, but also to explore the literature that focused on the evaluation of Threat Intelligence Programs. The aim of this part of the literature review was to answer the second question of this thesis, which concerns evaluating the success of the establishment process of the TIP.

The reviewed literature does not cover the evaluation of the TIP establishment process itself. Instead, the works explore and propose different models and techniques to evaluate the efficiency of Threat Intelligence Programs. However, if a TIP works well, its establishment process can be considered a success. This section explores some of the existing works that can be used to evaluate how well a TIP performs. A summary of the evaluation parameters is provided at the end of this section.

One way of evaluating a Threat Intelligence Program is by measuring the quality of the intelligence it produces. But how can Threat Intelligence analysts know that this data is of good quality? As NIST defines [41], good-quality data needs to be relevant, timely, actionable, and accurate.

Many papers have examined the quality of the data. Sillaber et al. [72], for instance, recommend the implementation of automated data quality error checks because manually entered information is prone to quality issues. They came to this conclusion after surveying ten security operations centre experts from different organisations.
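To illustrate what such automated error checks might look like in practice, below is a minimal sketch that validates the syntax of manually entered indicators before they are accepted into a TI repository. The rule set and function name are illustrative assumptions, not something prescribed by Sillaber et al.:

import ipaddress
import re

# Hypothetical syntax rules for common indicator types; a real deployment
# would extend these with semantic checks (e.g., reserved IP ranges).
HEX_RE = re.compile(r"^[0-9a-fA-F]+$")
HASH_LENGTHS = {32: "MD5", 40: "SHA-1", 64: "SHA-256"}
DOMAIN_RE = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)(\.[A-Za-z]{2,63})+$")

def validate_indicator(ioc_type, value):
    """Return a list of quality errors found in a single indicator."""
    errors = []
    value = value.strip()
    if not value:
        return ["empty value"]
    if ioc_type == "ip":
        try:
            ipaddress.ip_address(value)
        except ValueError:
            errors.append("malformed IP address: " + value)
    elif ioc_type == "hash":
        if len(value) not in HASH_LENGTHS or not HEX_RE.match(value):
            errors.append("unrecognised hash format: " + value)
    elif ioc_type == "domain":
        if not DOMAIN_RE.match(value):
            errors.append("malformed domain: " + value)
    else:
        errors.append("unknown indicator type: " + ioc_type)
    return errors

print(validate_indicator("ip", "10.0.0.300"))    # reports a malformed IP
print(validate_indicator("hash", "d41d8cd98f00b204e9800998ecf8427e"))  # [] (valid MD5)

Checks of this kind can run automatically at ingestion time, catching manual-entry errors before they propagate to intelligence consumers.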

Al-Ibrahim et al. [73] take a different approach to evaluating data quality by assessing the level of contribution of the security community members. They introduce the notion of Quality of Indicators (QoI) to measure the quality of the data; the indicators they use are correctness, relevance, uniqueness, and utility. Their experiments demonstrate that their QoI can verify real-world data accurately by evaluating the members that share the data, which allows low- and high-quality contributions to be classified successfully.
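As a simplified illustration of one QoI dimension, the snippet below scores a member's contribution by its uniqueness, i.e. the share of submitted indicators that the community has not seen before. The scoring is illustrative only and is not the authors' actual metric:

def uniqueness_score(member_indicators, community_indicators):
    """Fraction of a member's indicators that are new to the community --
    a simplified stand-in for the 'uniqueness' QoI dimension."""
    member_indicators = set(member_indicators)
    if not member_indicators:
        return 0.0
    novel = member_indicators - set(community_indicators)
    return len(novel) / len(member_indicators)

# Two of the three shared indicators are new to the community pool.
member = {"evil.example", "198.51.100.7", "bad.example"}
community = {"bad.example", "203.0.113.9"}
print(round(uniqueness_score(member, community), 2))  # 0.67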

Protiviti [74] proposed a model to evaluate the data quality of a Threat Intelligence Program. The author argues that a quality TIP must produce consistent and repeatable intelligence output. Figure 10 summarises the characteristics of a TIP's output:

Figure 10: Threat Intelligence Program Output. Based on [74].

According to the author, this output should contain context because data without context does

not provide any value. Thus, a successful TIP should contextualise data and align it with the

business objectives. Also, the output should not only provide information about the indicators


but also information about the threat mechanics. In other words, it should provide intelligence

on how the attacks work by understanding the attackers’ tactics, techniques, and procedures

(TTP). Then, the implications of the threat need to be assessed. For example, does this

intelligence have any meaning or value to the organisation? Is it relevant? What would be the

consequences within the organisation if the threat were successful? Finally, the intelligence needs to contain actionable advice on which decisions and preventative actions to take.

A novel way of evaluating the quality of the data is suggested by Bianco [75]. The author

developed a diagram called the “Pyramid of Pain” (figure 11) in which he shows the

relationship between the types of indicators defenders can use to detect and stop attackers’

activities:

Figure 11: Pyramid of Pain. Adapted from [75].

Inside the pyramid, different types of indicators are shown:

• Hash Values: This refers to hashes that correspond to suspicious or malicious files.

Normally used to identify and detect malware samples.

• IP Addresses: Known IP addresses or netblocks used by the attackers.

• Domain Names: Malicious domains or URLs.

• Network/Host Artefacts: Any observable on the defenders’ network that may indicate

malicious activity. For example, embedded Command and Control (CnC) traffic in a

legitimate protocol, URI patterns, etc.

• Tools: Utilities or software used by the attackers to carry out their activities.

• TTPs: Tactics, Techniques, and Procedures (TTP) – how the attackers accomplish their

mission. Their modus operandi.

Bianco [75] argues that if a defender responds to the indicators that are higher up in the pyramid,

the defender will increase the attacker’s level of “pain”. For example, if defenders only focus

on blocking IPs used by the attackers, the adversaries will simply use another IP. On the other

hand, if a defender responds to the adversary’s TTPs, the attackers will be forced to learn new


behaviours, which is a highly time-consuming task. In this situation, the attackers will need to

reinvent themselves or just give up.

The main takeaway from the "Pyramid of Pain" is that, depending on the types of indicators defenders use, adversaries will experience different levels of inconvenience, or pain, during their activities. The higher the "pain", the more likely the defenders will successfully deter the adversaries [35]. Thus, the higher the data sits in the pyramid, the better its quality.
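In practice, the pyramid can be read as a simple prioritisation scheme. The sketch below ranks intelligence items by the "pain" their indicator type inflicts on the adversary; the ordering follows Bianco's pyramid, while the numeric scores are an illustrative assumption:

# The ordering follows the Pyramid of Pain; the numeric values are
# illustrative only and are not part of the original model.
PAIN_LEVEL = {
    "hash": 1,
    "ip": 2,
    "domain": 3,
    "artefact": 4,   # network/host artefacts
    "tool": 5,
    "ttp": 6,
}

def prioritise(items):
    """Handle the indicator types most 'painful' for the adversary first."""
    return sorted(items, key=lambda i: PAIN_LEVEL.get(i["type"], 0), reverse=True)

feed = [
    {"type": "ip", "value": "203.0.113.9"},
    {"type": "ttp", "value": "spearphishing with ISO attachments"},
    {"type": "hash", "value": "d41d8cd98f00b204e9800998ecf8427e"},
]
for item in prioritise(feed):
    print(item["type"], "->", item["value"])  # ttp first, then ip, then hash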

De Melo e Silva et al. [76] proposed a methodology to evaluate TI platforms and standards based on the 5W3H (what, who, why, when, where, how, how much, and how long) approach. The "what" defines the area to be studied; in the TI context, this refers to threats. The "where" refers to the source of the threat and the path it takes until it reaches its objective. The time frame or time of occurrence is represented by the "when". The "how" describes the techniques and methodologies used by the threat. The "who" is associated with the threat actor, and the "why" explains its motivations. Lastly, "how much" measures the damage capacity of the attack and "how long" the effective durability of the attack. The researchers argue that the 5W3H methodology can be used to obtain a complete description of a topic. In this case, a TIP that follows the 5W3H approach is likely to produce actionable intelligence for defence or other courses of action [76].
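For illustration, a TIP could capture a 5W3H description of each threat as a simple structured record. The field contents below are hypothetical examples, not data from the paper:

from dataclasses import dataclass, asdict

@dataclass
class Threat5W3H:
    what: str      # the threat being studied
    where: str     # source of the threat and path towards the objective
    when: str      # time frame or time of occurrence
    how: str       # techniques and methodologies used
    who: str       # threat actor
    why: str       # motivation
    how_much: str  # damage capacity
    how_long: str  # effective durability of the attack

record = Threat5W3H(
    what="ransomware campaign",
    where="phishing email -> workstation -> file servers",
    when="observed this quarter, active for roughly two weeks",
    how="macro-enabled attachments, lateral movement over SMB",
    who="financially motivated group",
    why="extortion",
    how_much="potential encryption of all shared drives",
    how_long="until the entry point is patched and credentials rotated",
)
print(asdict(record))

A record with all eight fields answered is, per the authors' argument, far more likely to yield actionable intelligence than a bare indicator.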

From another perspective, different models have also been created to evaluate the current situation of a Threat Intelligence Program. For instance, the "Detection Maturity Level" (DML) proposed by Stillions [77] is a capability maturity model that describes how organisations consume and use threat information. The model follows two key principles: the maturity of an organisation is measured by its ability to act upon intelligence, in other words, how it applies the intelligence; and the organisation needs to detect the threat to have a chance to respond. Figure 12 conceptualises the main idea of the DML model:

Figure 12: TI Simplified. Adapted from [77].

According to the author, Threat Intelligence should provide three functionalities: intelligence gathering (intel), detection, and response. As depicted in the diagram, each step directly impacts the ability of the next to work optimally. This means that an organisation should be proficient in one activity before moving to the next. To assess whether a TIP is proficient in any of the areas, the DML includes nine maturity levels (0–8), with the highest level being the most abstract and the lowest level the most technically specific [77]. The basic concept of the model is that the higher-level the information a TIP manages to gather and act upon, the higher its maturity. This is similar to the concept presented by Bianco's Pyramid of Pain.


Other works have argued that, to be valuable, a successful TIP should provide four main functionalities: Prediction, Prevention, Detection, and Response [78],[100] (a small illustrative sketch follows this list):

• Prediction tries to identify what will attack next, where, and how. The intelligence provided here helps security operators know which attacks are most likely to hit their infrastructure, and how they can happen. Prediction can be used by Vulnerability Management (VM) teams, where the intelligence can help prioritise which patch to install first [8],[9].

• Prevention is all about hardening the potential security gaps within target assets and mitigating risks based on the intelligence from the prediction. Additionally, the effectiveness of current solutions should be improved. Prevention is invaluable for systems and network administrators, who can block known IoCs or apply workarounds to harden the assets [11].

• Detection is the functionality responsible for continuous monitoring to discover and qualify incidents. The main objective is to detect the threat as quickly as possible. Detection is a task usually assigned to SOC teams, which can detect malicious activities thanks to the intelligence [10]. Threat Hunters can also make use of this intelligence for the same purpose [9],[33].

• Response comes into play when the threat becomes an incident. This involves investigating the incident, neutralising and mitigating the attack, and the recovery process. When dealing with incidents, Incident Response teams can rely on TI to understand how the attack was successful, which will ultimately help them neutralise it [11],[12].
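As a small sketch of how these four functionalities could drive dissemination, the mapping below routes each intelligence item to the consumers named above. The team names and routing rule are illustrative assumptions, not a prescribed organisational structure:

# Illustrative mapping from TIP capability to typical consumers.
CAPABILITY_CONSUMERS = {
    "prediction": ["Vulnerability Management"],
    "prevention": ["System Administrators", "Network Administrators"],
    "detection": ["SOC", "Threat Hunters"],
    "response": ["Incident Response Team"],
}

def route(item):
    """Return the teams that should receive an intelligence item."""
    return CAPABILITY_CONSUMERS.get(item["capability"], [])

item = {"capability": "prediction",
        "summary": "exploit for a trending vulnerability; prioritise patching"}
print(route(item))  # ['Vulnerability Management']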

The GDPR (General Data Protection Regulation) has created legal challenges for organisations that wish to share information, especially those under EU jurisdiction. Thus, a TIP should also be evaluated from a legal point of view. To address this, Albakri et al. [55] define the legal aspects that the GDPR has brought to the sharing of intelligence. In addition, they present a model for assessing legal requirements and provide guidance on avoiding breaches of laws and regulations [55]. Organisations shouldn't overlook this novel approach if they want to ensure that their TIP complies with the GDPR.

The works reviewed in the literature explore and propose different models and techniques to evaluate the efficiency of Threat Intelligence Programs. One may argue that if a TIP is efficient, its establishment can be considered successful. Three main evaluation parameters were extracted from the literature review, which can be used to answer the second research question of the thesis: quality of intelligence, intelligence usage, and legal aspects. Table 3 summarises them:

Table 3: Evaluation Parameters.

Quality of Intelligence (how quality is measured):
▪ Relevant, timely, actionable, and accurate [41]
▪ Actionable, has context, provides IoCs and TTPs, and explains the implications of the threat [78]
▪ Correct, relevant, unique, and useful [73]
▪ Error-free [72]
▪ 5W3H [76]
▪ Pyramid of Pain: the amount of "pain", or difficulty, inflicted on the attacker [75]

Intelligence Usage (how the intelligence is consumed and applied):
▪ DML: gathering, detection, and response [77]
▪ Prediction, Prevention, Detection, Response [78],[100]

Legal (legal aspects):
▪ Model for assessing legal requirements and providing guidance to avoid breaching laws and regulations [55]

Quality of intelligence refers to the measurement of the quality of the output produced by the TIP. Many works have defined how to measure this quality. For example, NIST argues that the intelligence should be relevant (applicable to the organisation), timely (produced and shared fast), actionable (usable), and accurate (correct) [41]. Sillaber et al. [72] agreed with the accuracy characteristic; however, they referred to it as "error-free". Other works define quality as intelligence that is actionable, has context, provides Indicators of Compromise, and explains the TTPs used by the attackers; additionally, the intelligence should also explain the implications and impact of the threat [78]. Al-Ibrahim et al. [73] added a new characteristic not contemplated by other works: uniqueness. De Melo e Silva et al. [76] proposed their own methodology based on the 5W3H (what, who, why, when, where, how, how much, and how long) approach; the main takeaway is that a TIP following the 5W3H approach will produce quality intelligence. Another model to measure quality is the Pyramid of Pain, developed by Bianco [75]: the higher the "pain" an attacker must endure, the higher the quality of the intelligence.

The second parameter, intelligence usage, focuses on how the intelligence is consumed and applied. Intelligence can be used for prediction (what will attack next, where, and how), prevention (hardening potential security gaps), detection (noticing the threat as quickly as possible), and/or response (neutralising and mitigating the attack) purposes, depending on the organisation's needs. Stillions [77], on the other hand, argued that a TIP should provide intelligence gathering, detection, and response capabilities. Each capability directly influences the ability of the next to work optimally; thus, it is important to be proficient in one before moving to the next.

The last parameter is concerned with legal aspects. Organisations may decide to share their intelligence, yet they will need to ensure that they don't breach any laws or regulations. The model developed by Albakri et al. [55] defines the legal aspects that the GDPR has brought to the sharing of intelligence. The model can be used to assess legal requirements and provides guidance to avoid breaching any legal restriction.


3.4. Research Gap

This third chapter has reviewed the current literature on improving the practice of Threat Intelligence. The literature review was divided into two sections to explore what has been done in terms of starting and establishing a TIP (research question 1), and how it can be evaluated (research question 2). Table 4 summarises the works used to build the related work for this literature review:

Table 4: Selected Material for Review.

Title | Author(s) | Journal / Conference | Related Work
From cyber security information sharing to threat management | Brown & Serrano, 2015 [70] | ACM | Establishment of TI
A review and theoretical explanation of the 'Cyberthreat-Intelligence (TI) capability' that needs to be fostered in information security practitioners and how this can be accomplished | Shin & Lowry, 2020 [71] | Computers & Security | Establishment of TI
Integrating threat intelligence to enhance an organization's information security management | Gschwandtner et al., 2018 [8] | Availability, Reliability and Security | Establishment of TI
Guide to cyber threat information sharing | Johnson et al., 2016 [41] | NIST | Establishment of TI; Evaluation of TI
ENISA Threat Landscape 2020 | ENISA, 2020 [63] | Whitepaper | Establishment of TI
ENISA Publications | ENISA, 2021 [64] | Online resource | Establishment of TI
Cyber Threat Alliance: Resources | CTA, 2021 [66] | Online resource | Establishment of TI
SANS: Reading Room | SANS, 2021 [7] | Whitepaper | Establishment of TI
Data quality challenges and future research directions in threat intelligence sharing practice | Sillaber et al., 2016 [72] | ACM | Evaluation of TI
Beyond free riding: quality of indicators for assessing participation in information sharing for threat intelligence | Al-Ibrahim et al., 2017 [73] | arXiv | Evaluation of TI
Cyber Threat Intel: A State of Mind | Protiviti, 2016 [74] | Online resource | Evaluation of TI
The Pyramid of Pain | Bianco, 2013 [75] | Online resource | Evaluation of TI
SCERM—A novel framework for automated management of cyber threat response activities | Iqbal & Anwar, 2020 [35] | Future Generation Computer Systems | Evaluation of TI
The DML Model | Stillions, 2014 [77] | Online resource | Evaluation of TI
Cyber Threat Intelligence: What it is and how to use it effectively | AFKC, 2018 [78] | Whitepaper | Evaluation of TI
A Methodology to Evaluate Standards and Platforms within Cyber Threat Intelligence | De Melo e Silva et al., 2020 [76] | Future Internet | Evaluation of TI
Sharing cyber threat intelligence under the general data protection regulation | Albakri et al., 2019 [55] | Springer | Evaluation of TI
Best Practices for Applying Threat Intelligence | Recorded Future, 2020 [100] | Online resource | Evaluation of TI

Section 3.2 (Establishing a Threat Intelligence Program) presented the initiatives from ENISA, NIST, CTA, and SANS that try to improve the practice of TI. The same section also explored the academic works answering the same research question. Some authors have focused on defining the requirements and expectations a TIP should meet, while others have presented capability models prescribing the competences necessary to make use of TI. The reviewed materials have contributed to the improvement of TI practice, yet they don't solve the problem of this research. To date, the framework developed by Gschwandtner et al. [8] is the only known research paper that tries to address how to initiate a TIP. Their framework had positive results when implementing a TIP within Vulnerability Management activities; nevertheless, more work is needed if the framework is to be integrated into other Information Security processes.

Section 3.3 (Evaluating a Threat Intelligence Program) explored the works that can be used to evaluate a TIP. Some researchers have defined the quality of the data produced by TI to assess its efficacy. Additional models have been developed to evaluate the maturity of an existing TIP by arguing which functionalities a TIP should provide, while other researchers have explored the evaluation from a legal point of view. To answer the second research question, which asks how the success of the establishment process of a TIP can be evaluated, three main evaluation parameters were extracted from the literature review: quality of intelligence, intelligence usage, and legal aspects.

As demonstrated by the literature review, there is a knowledge gap in how to start and establish a Threat Intelligence Program in organisations. The lack of material from peer-reviewed journals is also worth noting. Hence, this thesis tries to address this gap by proposing a framework to guide organisations in starting a practical TIP that can support multiple Information Security areas. Furthermore, the framework will also provide a self-evaluation mechanism to evaluate the TIP's success. The framework thereby answers the first research question.


4. RESEARCH METHODOLOGY

This fourth chapter describes the methodology chosen for this research. It explains what Design Science Research (DSR) is and how it was applied to this study.

4.1. Qualitative Approach

Qualitative research is concerned with qualitative phenomena, in other words, with qualities that cannot be reduced to numerical values [79]. This methodology is exploratory in nature and is generally used when a deeper understanding of an issue or complex situation is needed [80]. Additionally, this method enables researchers to collect information on topics where little is known [81] by helping to define what is important: what needs to be studied [80].

In qualitative research, researchers understand the problem from the point of view of the

involved stakeholders. This insider perspective allows scholars to have a better view of the issue

by studying it in its natural setting [82]. For this thesis, the ability to gain an insider's perspective was particularly useful, as it helped in understanding what the organisation needed in terms of Threat Intelligence.

The methodology in qualitative research is often iterative. This allows researchers to go back and forth between the data collection and its analysis as they learn more about the topic [80]. This is valuable for open-ended research questions, complex situations, or when an artefact is built and improved based on feedback [80]. Thus, a qualitative approach was deemed more suitable not only to better comprehend the issue addressed in this thesis but also to improve the artefact based on the respondents' experience.

For the analysis of the data, researchers following qualitative research make substantial use of inductive reasoning. This is due to the nature of the research objective, which aims to generate a theory based on the data; thus, qualitative research ends with a hypothesis [83]. In this thesis, an inductive approach was maintained, involving observations and reasoning to later develop the end-of-study artefact: a framework.

4.2. Research Structure

Research is a systematic and organised effort to answer intellectual questions and solve practical problems [79]. In other words, it is the process of collecting, analysing, and evaluating data in order to gain new knowledge or increase understanding [80]. Addressing an issue, solving a problem, or answering a question is what research is all about.

Different research methodologies exist to guide the choices researchers make during their quest. Although most research methods follow similar steps, they have different approaches depending on the research question they are trying to answer. The purpose of this research is to improve practice by developing a framework to start a Cyber Threat Intelligence Program; in other words, the thesis has a utilitarian perspective. Consequently, Design Science Research (DSR) was chosen as the research methodology.

Design Science Research is a research method focused on developing knowledge that can be used to design and implement models, processes, and frameworks to reach practical objectives [84]. This methodology is driven by opportunities or field problems. Simply put, DSR seeks knowledge from real-life problems or opportunities that have significant practical relevance. As this thesis project tries to solve a real-life problem by implementing a framework, the DSR method was followed.

Design Science Research brings theoretical and practical rigour to information systems research. Different genres of DSR exist, and they vary depending on the problem being addressed, the type of artefact, and the contribution [85]. As represented in figure 13, four main DSR genres exist [101]:

Figure 13: DSR Genres. Adapted from [85].

The computational genre is focused on research that develops artefacts such as algorithms, analytics methods, or data representations [85]. The optimisation class contributes to the creation of artefacts that try to solve organisational issues and problems like profit maximisation and the optimisation of operations, management activities, or supply chain events [85]. Conversely, the representation genre supports the development of software artefacts to evaluate existing modelling methods using analytics [85]. Lastly, IS Economics contributes artefacts that focus on the role of Information Systems in economic activities [85].

This research developed an artefact to solve a business problem, in this case a framework to aid in starting a Threat Intelligence Program. Thus, the research followed the optimisation genre: the research starts with a problem that organisations are facing, an artefact is designed to address the issue, and a demonstration is carried out to show that the artefact works as intended.

Design Science Research methodology offers guidelines for effectively addressing a research

problem. The methodology includes six sequential stages [86],[87]:

Figure 14: DSR Process. Adapted from [84].

The DSR model is serial yet allows for iteration at the evaluation and communication stages if

needed. For example, if the artefact does not meet the evaluation criteria or issues are found,

DSR allows going back to the design stage to address the problems and improve the artefact.

Step 1: Problem Identification

The first step identifies a need or a problem that requires solving. Here the research problem is not only identified but also justified. Additionally, this first stage serves as the foundation of the topic and shows the current state of the problem in the literature.

The problem justification is covered in chapter 1. As mentioned earlier, Threat Intelligence

emerged to fight against attacks by detecting and preventing threats before they become

incidents. This proactiveness hasn’t gone unnoticed amongst organisations. Thus, firms

have started to integrate processes and procedures into their existing Information Security

areas by developing their own Threat Intelligence Programs.

However, there is a lack of frameworks to guide organisations in starting their own Threat Intelligence Programs. This has led many organisations to develop ineffective TI Programs,

which end up producing irrelevant data that cannot be used by their IT staff. Hence, this

research is meant to produce an artefact to address this problem.

Chapter 2 served as the foundation of what is known about Threat Intelligence and Threat Intelligence Programs. Chapter 3 included a literature review to present what has been done to address the issue and to demonstrate the research gap.

Step 2: Objectives Definition

The second activity of this methodology is the definition of the objectives of the artefact to be built. The aim of this study is to produce an artefact that can be utilised to establish a practical TIP; therefore, the objectives should be relevant to the scope of this study. The criteria that the artefact should meet are based on the findings from the literature review and on what was learned from the interviews.


Step 3: Design and Development

The design and development stage in the DSR methodology is about the design process and the artefact. The design process explains how the artefact was created, while the development is concerned with the creation of an artefact that did not exist before [84]. The design of the artefact followed an iterative process, as figure 15 represents. In the first iteration, a draft of the artefact was created based on the theory from the literature review and the comments from the respondents. In the following iterations, the feedback of the respondents was used to amend and improve the artefact. Finally, the artefact was demonstrated (step 4) and evaluated (step 5).

Figure 15: Design Process.

Step 4: Demonstration

The demonstration step in the DSR methodology usually shows how the artefact solves the problem. However, this thesis does not demonstrate how the artefact works, as it was not tested on a case scenario. Instead, the demonstration shows how the artefact can be implemented within an organisation; thus, this step refers to the implementation of the artefact. Chapter 6 contains details describing what needs to be done to implement the artefact and establish a Threat Intelligence Program.

Step 5: Evaluation

The continuous evaluation of the artefact is deemed critical during its creation, as it gives better insight into the problem and provides feedback aimed at improving the artefact's quality [85],[89]. Thus, DSR allows iterating from the evaluation back to the design phase to improve the artefact. If the artefact doesn't meet the evaluation criteria, it may need to be redesigned or amended [89].

The evaluation was divided into two parts: ex-ante (before an occurrence) and ex-post (after an occurrence) [102], which are presented in chapter 7. The occurrence in this case is the artefact presented in this thesis. As depicted in figure 16, ex-ante is concerned with validating that the research approach followed a disciplined process, whereas ex-post measures how well the artefact worked.

Figure 16: Ex-Ante & Ex-Post.

The main objective of adding an ex-ante evaluation was to determine whether the correct approach was followed before the artefact was built. This provides a more rigorous approach to knowledge creation [88]. The ex-ante evaluation assessed the purpose of the research by justifying that the problem exists and by examining existing solutions to the problem. The literature review provided the means to evaluate both, producing a problem statement and a research gap.

How well the artefact works was measured ex-post. In DSR, the justification of the artefact is not concerned with truth but with the effectiveness of the design [84]. For this, the artefact needs to prove that it "works" and satisfies given specifications [89]. These specifications, or criteria, were retrieved from the first interview with "Capsule Corp" staff. Thus, for the evaluation, logical argumentation was used to compare whether the defined criteria were met by the solution. Additionally, expert opinion from the case study was obtained to evaluate the suitability of the artefact.

Step 6: Communication

The final step of DSR is to conceptualise and generalise the findings in a descriptive manner to researchers and relevant audiences, both business and technical; in other words, the writing and presentation of the thesis. The thesis follows the common structure for empirical research papers [90]: problem definition, literature review, research methodology, results, discussion, and conclusion.

4.2.1. Data Collection

Regarding the data collection process, this research gathered the data mainly via interviews; yet other sources, such as observations and documentation available at the organisation, were also consulted. The interviews were conducted to understand the needs and views of the respondents in order to design the framework. Additionally, the interviews were also used to evaluate the artefact. For the interviews, a list of suitable respondents was first selected. The personnel had to belong to a department with relevance to IT and had to have a security background.


Additionally, the respondents had to be involved, directly or indirectly, with the Threat

Intelligence Program project:

o Respondent 1: SOC Team Leader

Responsible for managing the SOC (Security Operations Centre) team and owner of the Threat Intelligence Program project. The team consists of tier 1 (triage), tier 2 (incident responders), and tier 3 (threat hunters) analysts. The team is responsible for monitoring and fighting threats, developing detection rules for the SIEM, assessing security systems, and providing improvements.

o Respondent 2: Vulnerability Management Analyst

Personnel responsible for detecting weaknesses in the IT infrastructure and taking measures to correct them. Daily tasks include tracking vulnerabilities and their mitigation by scanning for possible issues on both end-devices and networks. The Vulnerability Management (VM) analyst is not involved in the development of the TIP; however, this person will make use of the intelligence provided by the new project.

o Respondent 3: Threat Hunting Analyst

Security professional responsible for proactively detecting, isolating, and neutralising advanced threats that cannot be caught by conventional methods like the SIEM or tier 1 SOC members. This analyst is directly involved in the development of the TIP.

o Respondent 4: Red Team Analyst

Member of the Red Team, which assumes the role of the "enemy" to provide security feedback from the attacker's perspective. This analyst is an employee of the organisation and has in-depth knowledge of the infrastructure and its defences. This respondent will be an end-user of the TIP and won't be involved in the development.

The interviews were conducted over the phone and contained both structured and unstructured questions to allow a better conversation. Five interview sessions were carried out. The aim of the first interview was to understand what the respondents were expecting from the Threat Intelligence Program as well as how the new intelligence would help them with their duties. The next three interviews were limited to respondents 1 and 3, who were involved in the development of the TIP project within their organisation. These interviews were used to get feedback on the design of the artefact while it was being built. The last interview, conducted only with respondents 1 and 3, was carried out at the end of the TIP project to gather their opinions and insights regarding the usefulness of the artefact built in this thesis.

4.2.2. Data Anonymisation

The data collection process avoided gathering sensitive data from the test organisation. However, due to privacy concerns, any identifiable information was sanitised to prevent private and confidential data from being shared with anyone outside the organisation. The main objective was simply to protect the firm and the respondents. To achieve this, respondents are not referred to by their names but by their job positions. Additionally, the organisation's name is not disclosed, and the document refers to it as "Capsule Corp" as a placeholder.

4.2.3. Thematic Analysis

Thematic Analysis (TA) is a method to "identify, analyse, and report themes (patterns) within data" [91]. This type of analysis is used to explore the context of open responses from interviews or surveys while enabling flexibility and interpretation when analysing the data [92]. As this research conducted interviews to understand what respondents expect from a TIP and to get feedback on the artefact, TA was used to interpret the data.

The TA process followed in this thesis is Castleberry and Nolen’s framework [92]:

• Compiling – Once the collected data has been organised, the aim is to read and re-read the data from the interviews to gain a better understanding of its meaning.

• Disassembling – This involves separating the data into meaningful groupings to identify concepts, themes, or ideas. For this, specific questions are asked of the data to identify the "what", "when", "how", "who", and "where".

• Reassembling – The groupings from the previous step are mapped to capture themes, which are "something important about the data" [91]. In other words, themes are patterns or something meaningful to this thesis.

• Interpreting – This step involves making analytical conclusions. The interpretation should be illustrative of the raw data and in context with the research topic.

• Concluding – This refers to the writing of the report with the conclusions of the TA process.

Thematic analysis describes how the interview data was processed and analysed. It showed how the respondents perceived a TIP and what they were expecting from it. Additionally, the respondents' views and feedback were used to improve, adapt, and evaluate the artefact of this study. The results of the TA are presented in chapter 7 of this document.


5. DESIGN

This fifth chapter describes the design process and the iteration phases that built the implementation framework. Each iteration added new features, which were then reviewed by the stakeholders. At the end of the chapter, the final artefact is presented.

5.1. Development Process

The design followed an iterative process with three major iterations. In the first iteration, the initial artefact was developed based on the findings from the first interviews and theory learnt from the literature review. The second and third iterations included changes and improvements after the artefact was reviewed by the respondents directly involved in the TIP development (respondents 1 and 3). This means that the framework was evaluated at the end of each iteration by the stakeholders, and their feedback was implemented in the subsequent iteration. The details of each iteration are presented along the timeline of the development phase:

5.1.1. Iteration 1

The initial design of the artefact, which is based on the findings from the first interview and

theory from the literature review, is presented in this iteration. Then, the evaluation of the

artefact is reviewed with the stakeholders.

Figure 17: Development Process.


Design

The initial design of the artefact was concerned with the creation of a draft, which contained some of the basic stages needed to initiate a Threat Intelligence Program. Based on the literature review, three major stages were defined:

➢ Stage 1: Scope Definition – what the TIP should provide (requirements)

➢ Stage 2: Integration – the TIP should work with the existing resources and infrastructure

➢ Stage 3: Utilisation – the use of the intelligence

The design in the first iteration then continued by conducting the first interview with "Capsule Corp" staff. The interview provided an overview of what the respondents were expecting from a TIP. Specifically, it provided insight into their expected requirements. With the information gathered from the interview, it was possible to expand the "Scope Definition" stage to include the following key points:

− Goal or purpose definition

− Level of intelligence needed

− Understand what the intelligence means for the organisation

− How to apply the intelligence

Additionally, the first interview showed that the respondents mainly wanted to prevent and detect attacks, as well as respond to them. The Vulnerability Management respondent indirectly suggested that the TIP should have prediction capabilities too:

This information should help us prioritising with what we should patch first. If a

vulnerability can be very easily exploited and it’s currently trending between the

hacking community, it’s likely we should patch it immediately. The main idea is to

have a better visibility of which of our software is more likely to be exploited to

focus our patching efforts.

Thus, a TIP should be able to provide prediction, prevention, detection, or response capabilities, which matches the findings from the literature review. Based on the literature review and the first interview, a rough artefact was designed:

Figure 18: First Rough Artefact.

Scope Definition

• Goal definition

• Intelligence needed

• How to apply the intelligence

• Prediction, prevention, detection, and response capabilities

Integration

• The TIP should work with the existing resources and infrastructure

Utilisation

• The use of the intelligence


The first interview also included extra questions only for the respondents involved in the development of the TIP. The objective of these questions was to define the criteria, or expectations, that the artefact of this thesis, the framework, had to meet. The following criteria were identified:

o Generic – The artefact should not be bound to any specific organisation nor IT architecture.

o Comprehensive – The artefact should offer enough guidance, yet be easy to follow even by audiences with no TI experience.

o IT awareness – The artefact shouldn't ignore the IT infrastructure already implemented within the organisation.

Design Evaluation

The goal of this first evaluation was to see whether the draft artefact had suitable stages, and particularly whether the first stage was appropriate. First, the rough artefact was shown and explained to the respondents to ensure that they understood its design. The respondents had full freedom to ask any questions.

The respondents argued that a self-evaluation mechanism for feedback purposes would be desirable. This could be used to review whether the utilisation of the intelligence had been useful or not. Additionally, the respondents proposed that the artefact be circular, which would provide the option of continuously improving the Threat Intelligence Program. Therefore, after this meeting, an extra criterion for the artefact was identified:

an extra criterion for the artefact was identified:

o Self-evaluation – The artefact needs to be able to self-evaluate its performance and

efficacy.

Regarding the "Scope Definition" stage, "Capsule Corp" staff agreed that the content seemed "fair", yet they wouldn't know how to proceed with it; more work would be needed here in the next iteration. The respondents also commented that it would be useful to define KPIs (Key Performance Indicators) in the first stage, which could be used to measure the TIP's performance.

5.1.2. Iteration 2

The initial artefact was redesigned based on the feedback from the previous iteration: an arrow was added to make the artefact "circular", and a feedback stage was added to allow continuous improvement. Additionally, KPIs were included in the "Scope Definition" stage.

Design

In the "Scope Definition" stage, which was renamed "Definition", defining a terminology was added as a step. As Mavroeidis and Bromander [93] argued, vaguely defined terminology and a lack of standardised representation can bring confusion. The same applies to acronyms; for example, the acronym "VM" may have multiple meanings depending on the context. Does it stand for Virtual Machine? Voice Message? Vendor Management? Therefore, when initiating a Threat Intelligence Program, creating and maintaining a list of terminology and abbreviations with their meanings is encouraged.

A new stage called "Planning" was created, which became the second stage. This stage focused on making plans for what needs to be done. Its key points were based on the findings from the literature:

− Defining a budget

− Deciding if a dedicated team is needed

− Assessing existing technologies and processes within the organisation

− Defining KPIs

In the first feedback session, the respondents had commented that the content of the artefact was "fair", yet they wouldn't know how to proceed with it. To address this, the artefact was divided into two parts: a High-Level part and a Low-Level part.

High-Level

Figure 19: High-Level Draft.

Low-Level

Table 5: Low-Level Draft.


The overall view of the framework (High-Level) showed a circular process which included a total of five stages: definition, planning, integration, utilisation, and evaluation. The framework being circular provided the option to evaluate the output of the TIP, which was necessary for the continuous improvement of the program. Additionally, this allowed evaluating new feature requests or new business needs.

Each stage was explained in more detail by providing key action points within a table (Low-Level). The table included the phase number and its name, a short description of what was to be done in the phase, and a list of key action points taken from the literature review. The table with details was developed to address the concerns presented by the respondents after the first iteration's design.

Design Evaluation

The new formatting of the artefact was discussed in this second evaluation. The respondents agreed that an arrow was needed, as it allowed continuous development of the TIP. The Low-Level part and the extra details added were seen as an improvement. They suggested that giving examples could further clarify the key points.

5.1.3. Iteration 3

After the second design evaluation, some formatting changes were introduced in the High-Level part. Colours were amended, and a subsection was added at the bottom of each stage as a complement. Additionally, in the guidelines section, examples were added where necessary for further clarification.

Design

In this third iteration, the remaining three stages were developed based on the knowledge learnt from the literature review: Integration, concerned with integrating the TIP into the existing infrastructure and processes; Utilisation, which refers to the use of the intelligence; and Evaluation, where feedback is gathered for future improvements.

For the integration stage, ideas from the literature review were combined with the expertise of the respondents. The key points of this phase were that the TIP needs to be aligned with the business needs and should be able to work with the existing infrastructure and processes.

The utilisation stage explained the process of converting data into intelligence and its usage. This stage encompassed three main steps:

− Collection of the data from multiple sources: free and/or commercial.

− Processing of the data into intelligence.

− Dissemination, which is all about making use of the intelligence produced.


The evaluation stage included the ability to receive feedback for improvement purposes. A TIP should not be considered a static program and should continuously adapt to meet new requirements and business needs. In this stage, the key points defined in the first two stages (definition and planning) were reviewed to ensure that the TIP is aligned with its goals and objectives. Additionally, mechanisms to evaluate the quality of intelligence were introduced.

Design Evaluation

The completed framework was shown to the respondents for evaluation. Minor comments were made regarding the formatting and visual presentation, which were later amended. Additionally, one of the respondents suggested adding a list of sources from which intelligence could be gathered. A graphic was added to visualise this information.

5.2. Final Artefact

The outcome of the design was the final artefact: a framework to establish a Threat Intelligence Program. The framework contains five phases, and each phase encapsulates a set of key actions that describe what needs to be done to establish a Threat Intelligence Program. The phases are arranged in sequence so that each phase lays the foundation for the succeeding one. The overall process can be considered circular, which allows continuous improvements and changes once the final stage has been reached.

The framework is presented in two parts: High-Level and Low-Level. The High-Level part gives an overall representation of the entire framework, while the Low-Level part summarises the key actions of each phase in tables.

High-Level

Figure 20: High-Level View.


Low-Level

Table 6: Low-Level View.

Phase 1 – Definition

This phase defines the direction and objectives of the Threat Intelligence Program. TI

practitioners should start by defining a formal terminology and establishing the scope of the

project.

➢ Define a common terminology and abbreviations

➢ State a scope with what is to be achieved

➢ Set directions and objectives to have a common vision and clear goal

➢ Describe the capabilities (prediction, prevention, detection, response) that are

required

➢ Select the intelligence levels (strategic, tactical, technical, operational) expected

➢ Define the audience and their deliverables

Phase 2 – Planning

This phase involves the process of making plans for what needs to be done. It considers the budget, whether a dedicated team is needed, and the evaluation of the current security capabilities within the organisation.

➢ Calculate the program budget. This should include labour, tools, training, and operating costs

➢ Work within the budget

➢ Build a dedicated team if needed

➢ Find tailored TI training for staff

➢ Assess current capabilities by listing the technologies and existing IT processes in

place within the organisation

➢ Set measurable Key Performance Indicators

Phase 3 – Integration

This phase involves the process of integrating the TIP into the existing infrastructure and

processes.

➢ Follow the defined direction and objectives from stage 1

➢ Implement the plans from stage 2

➢ Ensure that TIP works with the existing technologies

➢ Align with current business and IT processes, procedures, and policies


Phase 4 – Utilisation

This phase encompasses the collection, exploitation, and dissemination of the intelligence.

➢ Select suitable sources for the acquisition and provisioning of data

➢ Normalise the data by eliminating redundancies and ensuring that the data makes sense. Additionally, the data should be converted into a common format or standard to ease its storage and sharing

➢ Enrich the data by adding extra information such as DNS, malware intelligence, WHOIS, geolocation, and sandbox results (a minimal sketch of the normalisation and enrichment steps follows the table)

➢ Disseminate the intelligence to the relevant stakeholders and security controls in a timely manner and in a form that is easy to consume

➢ Comply with any privacy and trust requirements

➢ Automate any actions as much as possible

Phase 5 – Evaluation

This phase refers to the ability to receive feedback for improvement purposes. A TIP should not be considered a static program; it should continuously adapt to meet new requirements.

➢ Ensure that the required capabilities (prediction, prevention, detection, response) are

being provided

➢ Ensure that the TIP is providing the required intelligence levels (strategic,

operational, tactical, technical)

➢ Identify any roadblocks and find solutions

➢ Review if KPIs are met

➢ Measure the quality of data using the RAAT (relevant, accurate, actionable, timely)

dimension

➢ Collect feedback from the stakeholders

➢ Consider new feature requests or new business needs
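To complement Phase 4, the sketch below makes the "normalise and enrich" actions concrete: a raw feed entry is converted into one common internal format and then enriched from pre-fetched lookup tables. The field names and lookup tables are illustrative assumptions, not a prescribed schema:

from datetime import datetime, timezone

def normalise(raw, source):
    """Convert a raw feed entry into a single common internal format."""
    return {
        "type": raw.get("kind", "unknown").lower(),
        "value": raw["indicator"].strip().lower(),
        "source": source,
        "first_seen": raw.get("ts") or datetime.now(timezone.utc).isoformat(),
    }

def enrich(indicator, geoip, whois):
    """Attach extra context; here the lookups are simple pre-fetched tables."""
    indicator["geo"] = geoip.get(indicator["value"])
    indicator["whois"] = whois.get(indicator["value"])
    return indicator

raw = {"kind": "IP", "indicator": " 203.0.113.9 "}
ioc = enrich(normalise(raw, source="free-feed-A"),
             geoip={"203.0.113.9": "example-country"}, whois={})
print(ioc)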


6. IMPLEMENTATION

This sixth chapter explains how to implement the final artefact. As shown in the previous chapter, the developed framework contains five phases, and each phase encapsulates key actions. Details are provided to describe what needs to be done to implement the artefact and establish a Threat Intelligence Program.

Phase 1: Definition

Definition is the first phase and sets the direction and objectives of the Threat Intelligence Program. This phase includes the definition of a formal terminology and of the scope of the program.

Terminology

The very first step is defining a formal terminology. A list of terms and/or acronyms with their meanings is encouraged. This is not limited to Threat Intelligence terms, but extends to any IT buzzword or company-specific abbreviation. This will avoid confusion and miscommunication.
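Such a list can start as something as simple as a shared, version-controlled mapping. The entries below are examples only:

# Example glossary entries; extend with company-specific abbreviations.
GLOSSARY = {
    "TI": "Threat Intelligence",
    "TIP": "Threat Intelligence Program",
    "IoC": "Indicator of Compromise",
    "TTP": "Tactics, Techniques, and Procedures",
    "VM": "Vulnerability Management (not Virtual Machine in this programme)",
    "SIEM": "Security Information and Event Management",
}

def expand(term):
    return GLOSSARY.get(term, term + " is not defined -- add it to the glossary")

print(expand("VM"))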

Scope Definition

The next step is the definition of what is to be achieved with the TIP. Intelligence without purpose holds the same value as not having intelligence at all. Goals and objectives need to be clearly defined. Practitioners need to understand what the intelligence means for the organisation, how it is to be applied, and what the desirable outcomes are. This means that data needs to have a purpose and serve a strategic goal to gain an advantage; when everything is intelligence, nothing is intelligence [94]. There needs to be a clear use for the data, otherwise the data is only "noise".

Threat Intelligence aims to identify the attackers and understand how they operate. This intelligence ultimately enables defenders to take faster and more informed decisions to proactively protect themselves. However, directions and objectives need to be developed to have a common vision and a clear goal. Practitioners should ask themselves the following questions:

-Why is a TIP needed?

Practitioners should have a clear idea of what is to be achieved. This does not need to be specific, but it should give an overall view of the common goal. This goal will vary depending on the nature of the organisation. For example, a small local company may not be interested in intelligence about geopolitical terrorism, but they may want their firewalls to automatically block known malicious IoCs that affect the software they use. Thus, organisations may start by choosing the capabilities that they expect from the TIP.

A successful TIP should provide prediction (identify what will attack next), prevention

(information to harden security gaps), detection (threat monitoring and detection), or response

(investigation of incidents) capabilities. Each capability has different objectives and

organisations should decide first which ones they want.

-What kind of intelligence do we expect/need?

Once the organisation has decided which capabilities it wants to achieve, it may continue by choosing the type of intelligence level the TIP should provide. As explained in chapter 2, four intelligence levels exist: Tactical, Operational, Strategic, and Technical. Each intelligence level serves different objectives for different stakeholders, as represented in the figure below:

Figure 21: Intelligence Levels.

STRATEGIC – High-level intelligence covering trends and adversarial motives for decision-making. Stakeholders: C-Suite, CISO, CIO, CTO.
OPERATIONAL – Campaign tracking and threat actor profiling to prioritise operations. Stakeholders: VM, SOC.
TACTICAL – Low-level information about adversaries' TTPs to understand how attacks are conducted. Stakeholders: SOC, Hunters, IRT.
TECHNICAL – Specific details of the attacks for blocking purposes. Stakeholders: SOC, SIEM, FWs, IDS/IPS.

Strategic Intelligence aims to understand trends and adversarial motives from a high-level point of view to support strategy and business decisions. This intelligence should not contain technical data.

If the main objective of the TIP is to have information about attacks to detect and remediate

them, then Technical Intelligence is what the TIP should provide. This intelligence is used

immediately, and its main function is to feed the organisation’s SIEM, IDS/IPS (Intrusion

Detection Systems / Intrusion Prevention Systems), and/or firewalls (FW) with IoCs, such as

malicious IPs, URLs, or hashes in order to stop the attacks. This information can also be useful

to SOC analysts as well as other Blue Teams (defence teams) within the organisation. For

example, incident responders can start identifying the type of attack that hit the infrastructure

by analysing the IoCs.
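As an illustration, the following minimal Python sketch shows how technical intelligence might be pushed to a firewall automatically. The API endpoint, token handling, and payload format are hypothetical placeholders; real firewalls each expose their own management interface.

    import json
    import urllib.request

    FIREWALL_API = "https://firewall.example.internal/api/blocklist"  # hypothetical endpoint
    API_TOKEN = "change-me"  # in practice, read from a secrets manager

    def block_iocs(malicious_ips: list[str]) -> None:
        """Send a batch of malicious IPs to the firewall for immediate blocking."""
        payload = json.dumps({"action": "block", "ips": malicious_ips}).encode()
        request = urllib.request.Request(
            FIREWALL_API,
            data=payload,
            headers={"Authorization": f"Bearer {API_TOKEN}",
                     "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            print(f"Firewall responded with HTTP {response.status}")

    # TEST-NET addresses used purely as examples
    block_iocs(["203.0.113.10", "198.51.100.23"])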


Operational Intelligence refers to specific threats against the organisation by tracking

campaigns and profiling threat actors. For instance, a financial company may desire to track

campaigns against the banking industry and profile threat actors that have specialised in these

attacks. If defenders know who is more likely to target them, then they can respond accordingly.

This type of intelligence is normally used to conduct targeted and prioritised operations.

While Operational Intelligence focuses on the “who” and “why”, Tactical Intelligence tries to

clarify the “how”. The goal of Tactical Intelligence is to obtain a broader perspective of the

threats to understand adversaries’ TTPs. In other words, the idea is to understand the attackers’

capabilities, including tools and methods, to better defend the IT infrastructure by prioritising

security operations. The common stakeholders of this intelligence are SOC analysts, threat

hunters, and incident response teams.

-Who is going to use the intelligence?

Inside the scope, the audience or stakeholders of the intelligence should be defined too. TI

practitioners should understand each stakeholder’s needs as they do not require the same

intelligence; thus, the intelligence level and the output or deliverables from the TIP should be

tailored to the audience. Generally, upper management will need high-level intelligence

whereas most technicians will require low-level intelligence. This should be documented.

The table below depicts how this may be documented, with some examples:

Table 7: Stakeholder examples.

    STAKEHOLDER                LEVEL         DELIVERABLES
    Security Operations        Technical     IoCs
                               Tactical      Alerts for monitoring
    Vulnerability Management   Operational   Adversary exploits
                               Tactical      High-risk vulnerabilities
    Incident Response          Tactical      Adversary methods
    IT Management              Strategic     Analyst briefings; intelligence reports
                               Operational   Infrastructure risks
    Executives                 Strategic     Geopolitical threats and trends
    Legal Team                 Operational   Brand monitoring; fraud prevention

If the intelligence is to be shared with third parties such as clients, providers, business partners or

any external organisation, existing laws and regulations should be taken into consideration to

avoid any legal ramifications. Organisations are highly encouraged to consult their Legal

Departments or specialised legal advisors in the matter.

Phase 2: Planning

The planning phase is concerned with making plans for what needs to be done to start establishing a TIP. This phase considers the budget, whether a dedicated team is needed, the

evaluation of existing security capabilities within the organisation, and the definition of Key

Performance Indicators (KPI).

Budget

A project budget is needed to calculate the total projected cost to complete and run the Threat

Intelligence Program. This should include labour, tools, training, and operating costs. The

project budget should be a dynamic document and continuously updated during the

development of the program. This document will determine how much the TIP is likely to cost

and check whether the project is within the stipulated budget.

Dedicated Team

A dedicated team with the sole focus of providing valuable and actionable intelligence is

recommended. Teams are what drives success. People who share the same goal and work

together achieve better results.

Figure 21: Core Competencies and Skills. Extract from [95].


The skills and core competencies of an intelligence analyst vary, but generally they require core competencies in computing, information security, and technical exploitation [95]. Additionally, analysts need to possess communication and writing skills as well as data collection and examination abilities [95]. The figure above, which is an extract of research conducted at Carnegie Mellon University's Software Engineering Institute [95], summarises the key competencies and skills.

Dedication and a broad perspective ensure that analysts allocate enough time and effort. The

Threat Intelligence team could either be independent or belong to another security group. An

independent group may have greater autonomy, yet it may lack the deep knowledge or

perspective from other security departments. Nevertheless, staff in the TI team need to

understand the business objectives, workflows, and procedures as well as the technicalities of

the infrastructure [33].

Assess Current Capabilities

When starting a TIP, understanding the organisation's current state is also necessary. Unlike other security products, a TIP needs to be integrated with existing security systems. Hence, the TIP should work with the current systems and act as a mediator to eliminate manual operations.

To accomplish this, the TIP project owners need to list the technologies used and the existing

IT processes in place within the organisation. Asset inventories can be of great help and TIP

project owners should have a general idea of what’s available in the IT infrastructure:

− Firewalls

− Proxies

− SIEM

− Vulnerability Management tool

− Clients and servers operating systems

− Software

This information can help narrow down the intelligence the organisation needs. If the organisation's firewalls are FortiGate devices, it may not be necessary to track current attacks

against pfSense firewalls, for example. Furthermore, this information will also help with the

integration stage.
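A minimal sketch of how such an inventory could be used to filter a feed down to relevant intelligence is shown below; the inventory entries, feed records, and field names are illustrative assumptions rather than a prescribed format.

    # Illustrative inventory; a real one would come from a CMDB or asset tool.
    inventory = {"fortigate", "apache httpd", "windows server 2019", "postgresql"}

    # Illustrative feed records with an assumed "affected_product" field.
    feed = [
        {"indicator": "CVE-2021-0001", "affected_product": "pfSense"},
        {"indicator": "CVE-2021-0002", "affected_product": "Apache httpd"},
        {"indicator": "198.51.100.23", "affected_product": "FortiGate"},
    ]

    def relevant_only(records: list[dict], assets: set[str]) -> list[dict]:
        """Discard intelligence about products the organisation does not run."""
        return [r for r in records if r["affected_product"].lower() in assets]

    for record in relevant_only(feed, inventory):
        print(f"Keep {record['indicator']} (targets {record['affected_product']})")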

KPI Definition

As the TIP grows and evolves, it will be important to demonstrate what the organisation is

getting out of the investment. Practitioners need to show that the goals are being met, which

means quantifiable measures are required. The use of KPIs (Key Performance Indicators) is one

approach to demonstrate the return on security investment [28]. In other words, KPIs may be

used to evaluate the success of the Threat Intelligence Program. Every KPI should be related to a specific goal with a performance measure, as Figure 22 shows:


    KPI
    Incident Detection:   number of incidents detected; number of IoCs detected
    Foiled Attacks:       attacks prevented
    Vulnerabilities:      number of vulnerabilities identified prior to public
                          announcements; number of affected assets patched
    Conclusion Time
    Response and Remediation Time

Figure 22: KPI for Assessing Return on Investment. Extract from [28].

For example, if the TIP's objective is incident detection, the KPI may count the number of incidents and IoCs detected. On the other hand, if the TIP is to be used to prevent attacks, counting the attacks foiled can be a performance indicator. The number of vulnerabilities identified by the TIP before their public release, or the number of assets patched against critical vulnerabilities, can be used as indicators too. Another key area to consider is how long it takes to reach a conclusion from the detection and analysis of the intelligence. Additionally, how long it takes to react to the information may be used as a KPI.
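The following minimal Python sketch illustrates how such KPIs could be computed from incident records; the record layout and field names are assumptions, not a prescribed schema.

    from datetime import datetime
    from statistics import mean

    # Illustrative incident records; field names are assumptions.
    incidents = [
        {"detected": datetime(2021, 3, 1, 9, 0),
         "concluded": datetime(2021, 3, 1, 11, 30), "iocs": 4},
        {"detected": datetime(2021, 3, 5, 14, 0),
         "concluded": datetime(2021, 3, 5, 14, 45), "iocs": 1},
    ]

    incidents_detected = len(incidents)
    iocs_detected = sum(i["iocs"] for i in incidents)
    # Mean time (in hours) from detection to conclusion, a possible time-based KPI.
    mean_conclusion_hours = mean(
        (i["concluded"] - i["detected"]).total_seconds() / 3600 for i in incidents)

    print(f"Incidents detected: {incidents_detected}")
    print(f"IoCs detected: {iocs_detected}")
    print(f"Mean time to conclusion: {mean_conclusion_hours:.1f} h")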

Phase 3: Integration

This third stage is concerned with the realisation of the TIP. Everything that was defined in stage 1 should be materialised by following the plans from stage 2. How this is done is out of the scope of this framework, as it is specific to each organisation; however, some common examples include:

o Produce intelligence that the organisation will use, as defined in stage 1.

o Follow the plans from stage 2.

o Integrate threat intelligence with existing security solutions (SIEM, IDS/IPS, etc.).

o Send the indicators to proxies and firewalls for blocking, or correlate them against information in the SIEM; ideally, all of this happens without human intervention (a minimal correlation sketch follows this list).


o Ensure that the TIP is aligned with existing processes, procedures, and policies.
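Below is a minimal sketch of the kind of automated IoC correlation mentioned above. In practice a SIEM performs this matching natively; the log events and the alerting mechanism here are illustrative stand-ins.

    # Indicators received from the TIP.
    malicious_ips = {"203.0.113.10", "198.51.100.23"}

    # Illustrative log events; a SIEM would supply these in its own schema.
    log_events = [
        {"src_ip": "192.0.2.4", "dst_ip": "203.0.113.10", "host": "web01"},
        {"src_ip": "192.0.2.7", "dst_ip": "93.184.216.34", "host": "db02"},
    ]

    def correlate(events: list[dict], iocs: set[str]) -> list[dict]:
        """Flag any event whose destination matches a known-malicious IP."""
        return [e for e in events if e["dst_ip"] in iocs]

    for hit in correlate(log_events, malicious_ips):
        # A real pipeline would raise a SIEM alert or open an incident ticket.
        print(f"ALERT: {hit['host']} contacted malicious IP {hit['dst_ip']}")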

If any of the defined requirements or planned actions cannot be completed, identify the blockers and document the reason(s). This can be useful in the Evaluation stage to demonstrate why an objective or KPI could not be met. Additionally, it could also be used to request further resources for the next iteration.

Phase 4: Utilisation

The fourth phase encompasses the collection, exploitation, and dissemination of the intelligence

as Table 8 summarises:

Table 8: Utilisation Phase.

    STAGE           STEP            OPTIONS
    4. Utilisation  Collection
                    Processing      Data Normalisation; Data Aggregation;
                                    Data Enrichment
                    Dissemination   Easy Sharing: automatic and manual
                                    Timely: real-time, frequency
                                    Formatting: PDF, CSV, etc.
                                    Privacy and Trust: anonymous; sensitive data

Collection

Collection is the step focused on acquiring and provisioning the data. Multiple sources exist, both commercial and free, that can be used to collect information, as represented in Figure 23:

    Commercial:  TI Providers; TI Platforms
    Free:        OSINT; Security Advisories & Bulletins (vendors, governmental);
                 Social Media; Open-Source TI Platforms; Search Engines

Figure 23: TI Data Sources.

Organisations may purchase commercial platforms and provider services. A TI platform is a piece of

software that organises threat intelligence feeds (a collection of intelligence from multiple

sources) into a single stream [103]; however, they do not provide context and require extensive

manual analysis. In contrast, a TI provider does offer contextualised intelligence that is both

easy to digest and already actionable [103]. This means that the TI provider removes much of

the analyst’s manual tasks. Additionally, the intelligence is relevant and aligned with the

business objectives. All these functions offered by TI providers, however, do come at a higher

cost.

Multiple free sources are available too. Security advisories and bulletins provided by vendors, such as Cisco Talos and IBM X-Force, and by governmental bodies, such as CISA and CERT-EU, are a good starting point for gathering feeds. Social media accounts specialised in providing intelligence are also available. In addition to the commercial TI platforms, different open-source projects exist, like MISP and OpenCTI. Another option is the use of Open Source Intelligence (OSINT). OSINT can be defined as the process of collecting information from public sources such as forums and blogs, publications, public government data, and social networks [96]. It can be applied to spot illegal intentions at an early stage [97], as a source for tracebacks in forensic investigations after a breach [98], and to support effective defences and prompt reactions [99]. Multiple open-source and free tools have been developed for diverse purposes, like brand monitoring, VIP tracking, and password leak notifications, to mention a few. Additionally, search engines can be used as OSINT tools and should not be underestimated for finding security holes.
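As an illustration, the following minimal sketch fetches a plain-text indicator feed. The URL is a hypothetical placeholder, but many public blocklists follow this one-indicator-per-line convention.

    import urllib.request

    FEED_URL = "https://feeds.example.org/malicious-ips.txt"  # hypothetical feed

    def fetch_indicators(url: str) -> list[str]:
        """Download a plain-text feed and return one indicator per line."""
        with urllib.request.urlopen(url) as response:
            text = response.read().decode("utf-8")
        # Skip blank lines and '#' comments, a common blocklist convention.
        return [line.strip() for line in text.splitlines()
                if line.strip() and not line.startswith("#")]

    indicators = fetch_indicators(FEED_URL)
    print(f"Collected {len(indicators)} indicators")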

If budget is not a constraint, a combination of both commercial and free sources is recommended. Still, collecting data from the right sources is critical to avoid information overload and manual investigations. Furthermore, TI analysts should focus on two key concerns: context and time. The information should have context for the organisation, meaning it should be relevant; it makes no sense to monitor attacks against software XYZ if the organisation doesn't use it. Time is also a factor to be considered. This refers not only to the age of the information but also to the amount of time it takes to process it. This brings us to the next step after data collection: processing.

Processing

Processing is the step that comes after the data has been collected from numerous sources and

its objective is to convert the data into intelligence. To achieve this, data needs to be normalised,

aggregated, and enriched.

First, data needs to be normalised, which optimises the data by eliminating redundancies and ensuring that the data makes sense [13]. At this stage, data that is no longer malicious, does not have enough threat merit to action, or is no longer relevant is discarded too. Additionally, the data should be converted into a common format or standard to ease its storage and sharing, as the sketch below illustrates.
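A minimal normalisation sketch follows; the common schema and the 30-day relevance window are assumptions chosen for illustration, not part of the framework.

    from datetime import datetime, timedelta
    from typing import Optional

    def normalise(record: dict, source: str) -> Optional[dict]:
        """Map a raw feed record onto the TIP's common indicator schema."""
        last_seen = datetime.fromisoformat(record["last_seen"])
        if datetime.now() - last_seen > timedelta(days=30):
            return None  # assumed relevance window: discard stale indicators
        return {
            "type": record.get("type", "ip"),
            "value": record["value"].strip().lower(),
            "last_seen": last_seen,
            "source": source,
        }

    raw = {"value": "203.0.113.10 ", "type": "ip",
           "last_seen": "2021-03-05T14:00:00"}
    print(normalise(raw, source="example-feed"))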


Then, data may be aggregated by combining two or more pieces of data into a single one. The purpose of aggregation is to reduce the number of objects (cutting storage and computing costs), to change the scale, or to make the data more "stable", as aggregated data tends to have less variety [105].

Data enrichment improves the quality of the data by adding extra information. The main enrichment sources are DNS, malware intelligence, WHOIS, geolocation, and sandboxes, to mention a few [13]. Common examples of enrichment are adding the WHOIS information to an IP, including what services the IP hosts, or, as shown in the figure below, whether the IP is flagged on any blacklist:

Figure 24: Data Enrichment of an IP. Extract from AbuseIPDB.com.
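As a small illustration, the sketch below enriches an IP indicator with reverse-DNS information using only the Python standard library; a production TIP would additionally query WHOIS, geolocation, and blacklist services such as the one shown in Figure 24.

    import socket

    def enrich_with_rdns(indicator: dict) -> dict:
        """Attach the PTR hostname of an IP indicator, when one exists."""
        try:
            hostname, _, _ = socket.gethostbyaddr(indicator["value"])
            indicator["rdns"] = hostname
        except OSError:
            indicator["rdns"] = None  # no PTR record, or the lookup failed
        return indicator

    print(enrich_with_rdns({"type": "ip", "value": "8.8.8.8"}))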

All these processing steps should be carried out in an automated way due to their high resource and time costs. Manual preprocessing may be suitable only for specific cases, for example, when tailored intelligence is needed. However, the TIP should have the capability to preprocess the data automatically.

Dissemination

At this step, the intelligence is ready to be used as defined in the first stage of this framework.

Additionally, the intelligence should provide the required capabilities (prediction, prevention,

detection, or response) and meet the expectations from stage 1.

Then, the intelligence needs to be shared with the relevant stakeholders as well as with the organisation's security controls [13]. The TIP should have mechanisms to easily share the intelligence with the stakeholders, automatically if possible. In addition, the dissemination should meet timing requirements. For instance, if an IP has been deemed malicious, it should be blocked immediately. Moreover, the TIP should have the ability to share the information in the formats (PDF, CSV, STIX, etc.) that the organisation requires. A CSV may be a good way to share the information with technical staff, yet executives may prefer a PDF.
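For example, the minimal sketch below exports indicators as CSV for technical staff; executive-facing PDF reports would typically be produced by a reporting tool instead. The indicator records are illustrative.

    import csv

    indicators = [
        {"type": "ip", "value": "203.0.113.10", "source": "example-feed"},
        {"type": "url", "value": "http://malicious.example/", "source": "osint"},
    ]

    # Write the indicators to a CSV file for easy consumption by analysts.
    with open("indicators.csv", "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=["type", "value", "source"])
        writer.writeheader()
        writer.writerows(indicators)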


If the intelligence is to be shared with other organisations, the TIP should have mechanisms to

share the intelligence anonymously, and any sensitive data may need replacing to maintain

privacy and trust.

Phase 5: Evaluation

The Threat Intelligence Program should have the ability to receive feedback for improvement purposes, which is the objective of the fifth and final phase. Four key areas should be considered:

• Goals: Have the goals been achieved? In other words, is the TIP providing the expected

capabilities (prediction, prevention, detection, response) and intelligence levels (strategic,

operational, tactical, technical)? If not, the reason or roadblock should be identified. For

instance, prevention capability may not have been achieved because IoCs are not being

blocked automatically. Perhaps the firewall used by the organisation does not allow native

integration with the TIP. A possible solution could be to develop an in-house API

(Application Programming Interface) to overcome the limitation. It is critical to understand

why the TIP is not delivering what it promised in order to find a solution and fix the issue.

• KPI: Is the TIP meeting the defined KPIs? If not, why? For example, if many of the actions

are done manually during an investigation, the reaction time will be considerably high,

which may indicate an issue that needs addressing.

• Quality of Intelligence: The literature review covered different methods to measure the quality of the data, such as Protiviti's approach [74], Bianco's "Pyramid of Pain" [75], and the "Detection Maturity Level" developed by Stillions [77]. Any of these or other methods may be used; however, as NIST recommends, the intelligence should at least be relevant, accurate, actionable, and timely [41]. Table 9 summarises the main dimensions used to measure the quality of the intelligence:

Table 9: Dimensions to Measure Data Quality.

    R (Relevant): the intelligence has context and is valid to the organisation.
        1) Do analysts understand the business and its operations?
        2) Do the data sources identify threats to the organisation?

    A (Accurate): the intelligence is correct and contains the right value.
        1) How is accuracy corroborated?
        2) Is the intelligence updated when new data is received?

    A (Actionable): the intelligence is of value and provides sufficient detail
       to be usable.
        1) Is the intelligence being used?
        2) Does the intelligence provide enough details?

    T (Timely): the intelligence needs to be produced and delivered fast.
        1) How long does it take to produce and deliver the intelligence?
        2) How is the intelligence provided to ensure fast delivery?
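One possible, deliberately simple way to operationalise the RAAT dimensions is an equally weighted checklist score, sketched below. The boolean checks and the seven-day freshness threshold are assumptions, not part of the framework; organisations can substitute their own scoring model (for example, the Pyramid of Pain).

    def raat_score(item: dict) -> float:
        """Score one piece of intelligence against the four RAAT dimensions."""
        checks = {
            "relevant": item["matches_org_assets"],
            "accurate": item["corroborated"],
            "actionable": item["has_enough_detail"],
            "timely": item["age_days"] <= 7,  # assumed freshness threshold
        }
        return sum(checks.values()) / len(checks)

    sample = {"matches_org_assets": True, "corroborated": True,
              "has_enough_detail": False, "age_days": 2}
    print(f"RAAT score: {raat_score(sample):.2f}")  # prints 0.75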


Poor quality of intelligence may be worse than no intelligence at all. Poor quality may lead

to bad decisions and waste precious resources. Furthermore, it can potentially yield

detrimental consequences to the organisation. Thus, measuring the quality of the

intelligence is a must.

• Feedback: Direct or indirect feedback from the stakeholders can also give valuable information to improve the TIP. The TIP should have the ability to collect feedback from the stakeholders in an easy way. For instance, if the intelligence stakeholders receive is not relevant, they should be able to flag it as such. Additionally, stakeholders should have the option to request new features or give additional comments. Ultimately, if the stakeholders are not satisfied with the TIP, they will simply ignore the intelligence and not use it. Therefore, their opinion should not be overlooked.
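A minimal sketch of such a feedback mechanism is given below; the in-memory storage and the fields are illustrative, and a real TIP would persist this feedback and feed it into the Evaluation phase.

    from datetime import datetime

    feedback_log: list[dict] = []

    def flag_intelligence(item_id: str, stakeholder: str,
                          relevant: bool, comment: str = "") -> None:
        """Record whether a delivered piece of intelligence was useful."""
        feedback_log.append({
            "item": item_id,
            "stakeholder": stakeholder,
            "relevant": relevant,
            "comment": comment,
            "timestamp": datetime.now(),
        })

    flag_intelligence("IOC-2021-0042", "SOC", relevant=False,
                      comment="IP already decommissioned")
    print(f"{len(feedback_log)} feedback entries collected")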


7. RESULTS & DISCUSSION

This chapter presents the results of the ex-ante evaluation, concerned with the validation of the research, and of the ex-post evaluation, which measures the artefact. Together they evaluate the artefact built in this thesis and validate that the research followed a disciplined approach. At the end of the chapter, directions for future research are suggested.

7.1. Ex-Ante Evaluation

The ex-ante evaluation focused on the knowledge creation process and was conducted prior to the design and implementation of the artefact. Its aim was to produce an acceptable problem statement and research gap by observing a problem and looking for existing solutions in the literature.

Firstly, an initial literature review was conducted to gain foundational knowledge and identify

a problem within the Threat Intelligence field. Many areas requiring improvement were found

at this stage; yet a key problem that seemed overlooked was how to start and establish a Threat

Intelligence Program. Once the problem was detected, the literature was again reviewed with

the aim of finding what had been done to address it. As demonstrated in the literature review

chapter, no existing solutions were found. Consequently, the literature review not only found a

problem but also demonstrated that a research gap existed. With the findings, a problem

statement and a research gap were produced. This justified the need for this study.

7.2. Ex-Post Evaluation

The ex-post evaluation measured how well the created artefact worked. The artefact was built following the DSR methodology: a real-world problem for which insufficient models or methods exist was identified, a framework was built relying on intuition, expert opinion, and trial-and-error, and evaluation methods were applied to assess the final artefact.

To evaluate the artefact, a demonstration was performed. The demonstration was meant to include a real-life case scenario in which the artefact would be tested and analysed against the criteria defined by expert opinion. However, due to difficulties beyond the respondents' control, the TIP of the "Acme" organisation had to be postponed, which meant that the real-life case scenario was no longer possible. Therefore, only the criteria set for the artefact were reviewed. The table below summarises how the criteria were met:

Table 10: Ex-Post Evaluation.

    Generic: the artefact is neither technology-specific nor tailored to a
    particular industry or organisation type. The framework is intended to be
    used in any organisation regardless of infrastructure and business.

    Comprehensive: the artefact is divided into two parts: A, which shows an
    overall view of the artefact; and B, which gives a short description of
    each phase and contains key actions of what needs to be done. Further
    explanations with examples are also presented to complement each phase.

    IT Awareness: the "planning" phase of the artefact requires the
    identification and evaluation of current capabilities within the
    organisation. In order to integrate the TIP with the organisation, the
    phase recommends listing the technologies and IT processes already in
    place.

    Self-evaluation: the "evaluation" phase was included for this purpose.
    Here, whether the goals and KPIs have been met is reviewed, as is the
    quality of the intelligence. The artefact suggests different methods, like
    the Pyramid of Pain, and proposes using the RAAT (Relevant, Accurate,
    Actionable, Timely) dimensions. Additionally, the artefact endorses
    implementing direct or indirect feedback methods for continuous
    improvement.

The four criteria defined during the design process were achieved by the artefact. Generic was met by making the artefact neither technology-specific nor tailored to any organisation. The artefact has both an overall view and details with further explanations, making it comprehensive. The artefact considers existing infrastructure and processes for easy integration. And finally, the artefact has evaluation capabilities for continuous improvement.

7.3. Future Research

The original plan was to test the produced artefact in a real-life scenario; however, due to constraints beyond the control of the thesis, the artefact was in the end tested through evaluation and validation only. Although the evaluation gave positive results, as all the defined criteria were met, future research should apply the framework in a case scenario to see whether it works in a real-life project. Case-scenario testing could validate that the artefact works by exercising its complete functionality; it could not only improve current functionalities but also detect points that were not previously considered. Thus, a case scenario is highly recommended.

Further research could also be conducted in the integration phase of the artefact. For example,

a taxonomy of integration could be analysed, developed, and documented for this purpose.

This could help TI practitioners with the integration of the TIP with their existing infrastructure.

Further research into legal constraints is another subject for upcoming investigations. For instance, how to handle legal obligations when sharing intelligence could be further explored, including privacy and data protection laws, antitrust law, or intellectual property law. Another interesting legal topic is how a global organisation can handle multiple jurisdictions to avoid liability.


8. CONCLUSION

Threat Intelligence (TI) is a field used in Information Security (IS) to support the decision-making process by providing evidence-based knowledge [4]. To achieve this, TI gathers vital intelligence about adversaries and their methods and techniques. The information that TI provides can aid in reinforcing the defences of the IT infrastructure.

TI is a growing topic not only in academia but also within private companies. Organisations are

quickly adopting TI with the aim of protecting their infrastructure; yet many fail to use it

properly. Most firms do not have any plans or requirements for their TI Programs. Thus, IT

security professionals are flooded with too much or irrelevant data, which has led them to ignore

the intelligence they receive.

As demonstrated in the literature review, research papers have studied the applications of Threat Intelligence and proved the advantages it provides; still, how to initiate a collection of processes and procedures to make Threat Intelligence actionable within an organisation has been overlooked [11], [14], [15]. Some work has been done to address this gap, but the advancements have been small. To address this gap in the literature, this research has developed a framework that can guide firms in their quest of starting their own Threat Intelligence Program (TIP) and that can be integrated into any organisation. The framework can help organisations define their TIP requirements and appropriately operationalise intelligence work to support different Information Security processes.

The framework developed in this thesis provides a foundation for the establishment of a TIP within organisations. To achieve this, the framework has been made generic so that it can be used in any organisation. Additionally, the artefact is divided into two parts: one that gives an overall view of the whole process, and another that provides key actions and further details. This division is what makes the framework comprehensive. The framework also has IT awareness capabilities, which means that it considers existing technologies and processes when developing a TIP. Furthermore, the framework seeks continuous improvement of the TIP by requiring self-evaluation mechanisms. These evaluation mechanisms not only evaluate the intelligence produced by the TIP but also check that the TIP's goals and KPIs are met.

The thesis not only developed an artefact to provide a foundation for the establishment of a TIP within most organisations, but also presented a list of evaluation parameters that can be used to measure the success of the establishment of a TIP. Three main parameters were identified from the literature: Quality of Intelligence, which measures the value of the output produced by the TIP; Intelligence Usage, which evaluates how the intelligence is consumed and applied; and Legal, which concerns legal requirements.


Consequently, this thesis addresses a clear gap in how to establish a TIP by presenting a

framework. The framework provides guidance on how a TIP can be established to

accommodate Information Security processes within organisations. Additionally, the research explored current literature to present a list of parameters for evaluating the success of the establishment of a TIP.


REFERENCES

[1] W. Tounsi, "What is Cyber Threat Intelligence and How is it Evolving?" Cyber‐Vigilance

and Digital Trust: Cyber Security in the Era of Cloud Computing and IoT, pp. 1-49,

2019.

[2] L. Giles, Sun Tzǔ on the Art of War: The Oldest Military Treatise in the World. 1910.

[3] J. Zhao et al, "TIMiner: Automatically extracting and analyzing categorized cyber threat

intelligence from social data," Comput. Secur., vol. 95, pp. 101867, 2020.

[4] M. Parmar and A. Domingo, "On the use of cyber threat intelligence (CTI) in support of

developing the commander's understanding of the adversary," in MILCOM 2019-2019

IEEE Military Communications Conference (MILCOM), 2019.

[5] R. Williams et al, "Incremental hacker forum exploit collection and classification for

proactive cyber threat intelligence: An exploratory study," in 2018 IEEE International

Conference on Intelligence and Security Informatics (ISI), 2018.

[6] A. Ramsdale, S. Shiaeles and N. Kolokotronis, "A Comparative Analysis of Cyber-Threat

Intelligence Sources, Formats and Languages," Electronics, vol. 9, (5), pp. 824, 2020.

[7] M. Bromiley, "Threat intelligence: What it is, and how to use it effectively," SANS

Institute InfoSec Reading Room, vol. 15, pp. 172, 2016.

[8] M. Gschwandtner et al, "Integrating threat intelligence to enhance an organization's

information security management," in Proceedings of the 13th International

Conference on Availability, Reliability and Security, 2018.

[9] W. Tounsi and H. Rais, "A survey on technical threat intelligence in the age of

sophisticated cyber attacks," Comput. Secur., vol. 72, pp. 212-233, 2018.

[10] R. Brown and R. M. Lee, "The Evolution of Cyber Threat Intelligence (CTI): 2019

SANS CTI Survey," SANS Institute: Singapore, 2019.

[11] ENISA, "Threat Landscape 2020 - Cyber threat intelligence overview," 2020.

[12] FireEye, "The History of OpenIOC", 2021. Available:

https://www.fireeye.com/blog/threat-research/2013/09/history-openioc.html.

[13] ENISA, "Exploring the opportunities and limitations of current Threat Intelligence

Platforms," 2017.

[14] T. D. Wagner et al, "Cyber threat intelligence sharing: Survey and research directions,"

Comput. Secur., vol. 87, pp. 101589, 2019.

[15] G. Takacs, "Integration of CTI into security management," 2019.


[16] Ponemon Institute, "The Value of Threat Intelligence: Annual Study of North American

& United Kingdom Companies" -02, 2019.

[17] Y. Desmedt, "Potential impacts of a growing gap between theory and practice in

information security," in Australasian Conference on Information Security and

Privacy, 2005.

[18] P. Runeson, "It takes two to tango--an experience report on industry--academia

collaboration," in 2012 IEEE Fifth International Conference on Software Testing,

Verification and Validation, 2012.

[19] P. Grünbacher and R. Rabiser, "Success factors for empirical studies in industry-

academia collaboration: A reflection," in 2013 1st International Workshop on

Conducting Empirical Studies in Industry (CESI), 2013.

[20] A. Sandberg, L. Pareto and T. Arts, "Agile collaborative research: Action principles for

industry-academia collaboration," IEEE Software, vol. 28, (4), pp. 74-83, 2011.

[21] M. S. Abu et al, "Cyber threat intelligence–issue and challenges," Indonesian Journal of

Electrical Engineering and Computer Science, vol. 10, (1), pp. 371-379, 2018.

[22] J. Van Bon et al, Foundations of IT Service Management Based on ITIL®. 2008.

[23] FBI IC3, "FBI: Internet Crime Report 2020," Computer Fraud & Security, vol.

2021, (4), pp. 4, 2021. Available: https://dx.doi.org/10.1016/S1361-3723(21)00038-5.

DOI: 10.1016/S1361-3723(21)00038-5.

[24] EC-Council. "The Status of the Threat Intelligence Market in 2020", 2020. Available:

https://blog.eccouncil.org/the-status-of-the-threat-intelligence-market-in-2020/.

[25] Google. "Google Trends", 2021. Available:

https://trends.google.com/trends/explore?date=all&q=threat%20intelligence.

[26] Dimensions. "Timeline - Overview for Threat Intelligence Fields of Research: 08

Information and Computing Sciences, 0806 Information Systems in Publications –

Dimensions" , 2021. Available:

https://app.dimensions.ai/analytics/publication/overview/timeline?search_mode=conte

nt&search_text=%22threat%20intelligence%22&search_type=kws&search_field=text

_search&or_facet_for=2208&or_facet_for=2790.

[27] P. Gao et al, "Enabling Efficient Cyber Threat Hunting With Cyber Threat Intelligence,"

arXiv Preprint arXiv:2010.13637, 2020.

[28] C. Onwubiko, "Security operations centre: Situation awareness, threat intelligence and

cybercrime," in 2017 International Conference on Cyber Situational Awareness, Data

Analytics and Assessment (Cyber SA), 2017.

[29] N. Moustafa et al, "A new threat intelligence scheme for safeguarding industry 4.0

systems," IEEE Access, vol. 6, pp. 32910-32924, 2018.


[30] H. Zhang et al, "Network attack prediction method based on threat intelligence for IoT,"

Multimedia Tools Appl, vol. 78, (21), pp. 30257-30270, 2019.

[31] D. Chismon and M. Ruks, "Threat intelligence: Collecting, analysing, evaluating," MWR

InfoSecurity Ltd, 2015.

[32] W. Bautista, Practical Cyber Intelligence: How Action-Based Intelligence can be an

Effective Response to Incidents. 2018.

[33] C. Pace, "The Threat Intelligence Handbook: A Practical Guide for Security Teams to

Unlocking the Power of Intelligence," Annapolis, CyberEdge Group, 2018.

[34] R. D. Steele, "Open source intelligence," Handbook of Intelligence Studies, vol. 42, (5),

pp. 129-147, 2007.

[35] Z. Iqbal and Z. Anwar, "SCERM—A novel framework for automated management of

cyber threat response activities," Future Generation Comput. Syst., vol. 108, pp. 687-

708, 2020.

[36] M. Conti, T. Dargahi and A. Dehghantanha, "Cyber threat intelligence: Challenges and

opportunities," in Cyber Threat Intelligence 2018.

[37] O. Catakoglu, M. Balduzzi and D. Balzarotti, "Automatic extraction of indicators of

compromise for web applications," in Proceedings of the 25th International

Conference on World Wide Web, 2016.

[38] C. Sabottke, O. Suciu and T. Dumitraș, "Vulnerability disclosure in the age of social

media: Exploiting twitter for predicting real-world exploits," in 24th {USENIX}

Security Symposium ({USENIX} Security 15), 2015.

[39] M. Ebrahimi, C. Y. Suen and O. Ormandjieva, "Detecting predatory conversations in

social media by deep convolutional neural networks," Digital Investigation, vol. 18,

pp. 33-49, 2016.

[40] I. Deliu, C. Leichter and K. Franke, "Extracting cyber threat intelligence from hacker

forums: Support vector machines versus convolutional neural networks," in 2017

IEEE International Conference on Big Data (Big Data), 2017.

[41] C. Johnson et al, "Guide to cyber threat information sharing," NIST Special Publication,

vol. 800, (150), 2016.

[42] E. Kim et al, "CyTIME: Cyber threat intelligence ManagEment framework for

automatically generating security rules," in Proceedings of the 13th International

Conference on Future Internet Technologies, 2018.

[43] GitHub. "Introduction to STIX", 2021. Available:

https://oasis-open.github.io/cti-documentation/stix/intro.

[44] GitHub. "Introduction to TAXII", 2021. Available:


https://oasis-open.github.io/cti-documentation/taxii/intro.

[45] S. Barnum, "Standardizing cyber threat intelligence information with the structured

threat information expression (stix)," Mitre Corporation, vol. 11, pp. 1-22, 2012.

[46] L. Dandurand and O. S. Serrano, "Towards improved cyber security information

sharing," in 2013 5th International Conference on Cyber Conflict (CYCON 2013),

2013.

[47] S. Murdoch and N. Leaver, "Anonymity vs. trust in cyber-security collaboration," in

Proceedings of the 2nd ACM Workshop on Information Sharing and Collaborative

Security, 2015.

[48] C. Wagner et al, "Misp: The design and implementation of a collaborative threat

intelligence sharing platform," in Proceedings of the 2016 ACM on Workshop on

Information Sharing and Collaborative Security, 2016.

[49] P. Pawlinski et al, "Actionable Information for Security Incident Response", Tech. rep.

ENISA. 2015.

[50] P. Kampanakis, "Security automation and threat information-sharing options," IEEE

Security & Privacy, vol. 12, (5), pp. 42-51, 2014.

[51] G. D. P. Regulation, "Regulation EU 2016/679 of the European Parliament and of the

Council of 27 April 2016," Official Journal of the European Union.Available at:

Http://Ec.Europa.Eu/Justice/Data-Protection/Reform/Files/Regulation_oj_en.Pdf,

2016.

[52] L. Sweeney, "Operationalizing American Jurisprudence for Data Sharing,", Technical

Report, 2013.

[53] T. Breaux and A. Antón, "Analyzing regulatory rules for privacy and security

requirements," IEEE Trans. Software Eng., vol. 34, (1), pp. 5-20, 2008.

[54] Dutch National Centre of Expertise and Repository for Research Data. "Tagging

Privacy-Sensitive Data According to the New European Privacy Legislation: GDPR

DataTags – a Prototype", 2017. Available: https://dans.knaw.nl/en/current/first-gdpr-

datatags-results-presented-in-workshop.

[55] A. Albakri, E. Boiten and R. De Lemos, "Sharing cyber threat intelligence under the

general data protection regulation," in Annual Privacy Forum, 2019.

[56] J. v. Brocke et al, "Reconstructing the giant: On the importance of rigour in documenting

the literature search process," 2009.

[57] D. R. Cooper, P. S. Schindler and J. Sun, Business Research Methods. 2006.

[58] Y. Levy and T. J. Ellis, "A systems approach to conduct an effective literature review in

support of information systems research," 2006.


[59] R. T. Watson and J. Webster, "Analysing the past to prepare for the future: Writing a

literature review a roadmap for release 2.0," Journal of Decision Systems, vol. 29, (3),

pp. 129-147, 2020.

[60] D. Nicholas et al, "Peer review: Still king in the digital age," Learned Publishing, vol.

28, (1), pp. 15-21, 2015.

[61] J. P. Tennant, "The state of the art in peer review," FEMS Microbiol. Lett., vol. 365, (19),

pp. fny204, 2018.

[62] ENISA. "About ENISA - The European Union Agency for Cybersecurity", 2021.

Available: https://www.enisa.europa.eu/about-enisa.

[63] ENISA. "ENISA Threat Landscape through the years", 2021. Available:

https://www.enisa.europa.eu/topics/threat-risk-management/threats-and-trends/enisa-

threat-landscape.

[64] ENISA. "Publications", 2021. Available: https://www.enisa.europa.eu/publications.

[65] NIST. "NIST General Information", 2021. Available:

https://www.nist.gov/director/pao/nist-general-information.

[66] Cyber Threat Alliance. "CTA Infographic", 2021. Available:

https://www.cyberthreatalliance.org/resources/cta-infographic/.

[67] Cyber Threat Alliance, "Solutions Fact Sheet," 2020. Available:

https://www.cyberthreatalliance.org/resources/cta-solutions-fact-sheet/

[68] SANS. "About", 2021. Available: https://www.sans.org/about/?msc=main-nav.

[69] P. N. Tandon, "Impact Sans Impact Factor," National Academy science letters, 38, 521-

527. 2015.

[70] S. Brown, J. Gommers and O. Serrano, "From cyber security information sharing to

threat management," in Proceedings of the 2nd ACM Workshop on Information

Sharing and Collaborative Security, 2015.

[71] B. Shin and P. B. Lowry, "A review and theoretical explanation of the ‘Cyberthreat-

Intelligence (CTI) capability' that needs to be fostered in information security

practitioners and how this can be accomplished," Comput. Secur., vol. 92, pp. 101761,

2020.

[72] C. Sillaber et al, "Data quality challenges and future research directions in threat

intelligence sharing practice," in Proceedings of the 2016 ACM on Workshop on

Information Sharing and Collaborative Security, 2016.

[73] O. Al-Ibrahim et al, "Beyond free riding: quality of indicators for assessing participation

64

in information sharing for threat intelligence," arXiv Preprint arXiv:1702.00552,

2017.

[74] Protiviti, "Cyber Threat Intel: A State of Mind", 2016. Available:

https://chapters.theiia.org/chicago/Events/Presentations%20from%203rd%20Annual%

20IIAISACA%202%20Day%20Hacki/Day%201/7%20-%20Protiviti.pdf.

[75] Detect-Respond. "The Pyramid of Pain", 2013. Available:

http://detect-respond.blogspot.com/2013/03/the-pyramid-of-pain.html.

[76] A. de Melo e Silva et al, "A Methodology to Evaluate Standards and Platforms

within Cyber Threat Intelligence," Future Internet, vol. 12, (6), pp. 108, 2020.

[77] Ryan Stillions. "The DML model", 2014. Available:

http://ryanstillions.blogspot.com/2014/04/the-dml-model_21.html.

[78] AFKC, "Cyber Threat Intelligence - What it is and how to use it effectively,"

2018.

[79] G. R. Taylor, Integrating Quantitative and Qualitative Methods in Research. 2005.

[80] P. D. Leedy, J. E. Ormrod and L. R. Johnson, Practical Research: Planning and

Design. 2014.

[81] V. Dickson-Swift et al, "Doing sensitive research: what challenges do qualitative

researchers face?" Qualitative Research, vol. 7, (3), pp. 327-353, 2007.

[82] N. K. Denzin and Y. S. Lincoln, "Transforming qualitative research methods: Is it a

revolution?" Journal of Contemporary Ethnography, vol. 24, (3), pp. 349-358, 1995.

[83] H. S. Becker, "Generalizing from case studies," Qualitative Inquiry in Education: The

Continuing Debate, vol. 233, pp. 242, 1990.

[84] A. Hevner and S. Chatterjee, "Design science research in information systems," in

Design Research in Information Systems, 2010.

[85] A. Elragal and M. Haddara, "Design science research: Evaluation in the lens of big data

analytics," Systems, vol. 7, (2), pp. 27, 2019.

[86] K. Peffers et al, "A design science research methodology for information systems

research," J. Manage. Inf. Syst., vol. 24, (3), pp. 45-77, 2007.

[87] G. L. Geerts, "A design science research methodology and its application to accounting

information systems research," International Journal of Accounting Information

Systems, vol. 12, (2), pp. 142-151, 2011.

[88] C. Sonnenberg and J. Vom Brocke, "Evaluations in the science of the artificial–

reconsidering the build-evaluate pattern in design science research," in International

Conference on Design Science Research in Information Systems, 2012.


[89] J. Venable, J. Pries-Heje and R. Baskerville, "A comprehensive framework for

evaluation in design science research," in International Conference on Design Science

Research in Information Systems, 2012.

[90] K. Peffers et al, "Design Science Research Process: A Model for Producing and

Presenting Information Systems Research," arXiv Preprint arXiv:2006.02763, 2020.

[91] V. Braun and V. Clarke, "Using thematic analysis in psychology," Qualitative Research

in Psychology, vol. 3, (2), pp. 77-101, 2006.

[92] A. Castleberry and A. Nolen, "Thematic analysis of qualitative research data: Is it as easy

as it sounds?" Currents in Pharmacy Teaching and Learning, vol. 10, (6), pp. 807-

815, 2018.

[93] V. Mavroeidis and S. Bromander, "Cyber threat intelligence model: An evaluation of

taxonomies, sharing standards, and ontologies within cyber threat intelligence," in

2017 European Intelligence and Security Informatics Conference (EISIC), 2017.

[94] W. Agrell, "When Everything is Intelligence-Nothing is Intelligence,", Central

Intelligence Agency Washington DC. 2002. Available at:

https://apps.dtic.mil/sti/pdfs/ADA526584.pdf

[95] S. R. Chabinsky, "Cybersecurity strategy: A primer for policy makers and those on the

front line," J.Nat'L Sec.L.& Pol'Y, vol. 4, pp. 27, 2010.

[96] J. Pastor-Galindo et al, "The not yet exploited goldmine of OSINT: Opportunities, open

challenges and future trends," IEEE Access, vol. 8, pp. 10282-10304, 2020.

[97] H. L. Larsen et al, Using Open Data to Detect Organized Crime Threats: Factors

Driving Future Crime. 2017.

[98] D. Quick and K. R. Choo, "Digital forensic intelligence: Data subsets and Open Source

Intelligence (DFINT OSINT): A timely and cohesive mix," Future Generation

Comput. Syst., vol. 78, pp. 558-567, 2018.

[99] P. Nespoli et al, "Optimal countermeasures selection against cyber attacks: A

comprehensive survey on reaction frameworks," IEEE Communications Surveys &

Tutorials, vol. 20, (2), pp. 1361-1396, 2017.

[100] Recorded Future. “Best Practices for Applying Threat Intelligence”, 2020. Available:

https://go.recordedfuture.com/hubfs/white-papers/applying-threat-intelligence.pdf.

[101] A. Rai, "Editor's comments: Diversity of design science research," MIS Quarterly, vol.

41, (1), pp. iii-xviii, 2017.

[102] O. Steiger, "Ex-ante and ex-post," Eatwell, J, pp. 199-201, 1987.

[103] Recorded Future, "Threat Intelligence: Difference Between Platforms and Providers",

66

2017. Available: https://www.recordedfuture.com/threat-intelligence-platform/.

[105] P. Tan, M. Steinbach and V. Kumar, Introduction to Data Mining. Pearson Education, Inc., New Delhi, 2006.