Finding Appropriate Learning Objects: An Empirical Evaluation

Jehad Najjar, Joris Klerkx, Riina Vuorikari and Erik Duval

Computer Science Department, K.U.Leuven B-3001 Leuven, Belgium

{najjar, jorisk, Riina, erikd}@cs.kuleuven.ac.be

Abstract: The challenge of finding appropriate learning objects is one of the bottlenecks for end users of Learning Object Repositories (LORs). This paper investigates usability problems of search tools for learning objects. We present findings and recommendations of an iterative usability study conducted to examine the usability of a search tool used to find learning objects in the ARIADNE Knowledge Pool System [1]. The findings and recommendations of this study are generalized to other similar search tools.

1 Introduction

Much of the current research on content reuse for learning focuses on the notion of learning objects [9]. Learning objects are stored in Learning Object Repositories (LORs) (such as ARIADNE [1], EdNa [10], SMETE [24], or Merlot [16]) to be used for different kinds of educational purposes and by different end users. One of the bottlenecks for end users is the challenge of finding appropriate learning objects in the current LORs [9]. There are several reasons why users cannot easily find appropriate objects, among them the fact that current search tools are not user friendly and require users to provide too much information in order to locate relevant objects.

In Ariadne [1], over the last decade we have been researching concepts, techniques and tools to facilitate finding appropriate learning objects. More and more, we are carrying out empirical studies to evaluate tools used to search or index learning objects.

Empirical evaluation has been used to evaluate and improve the use and usability of different tools in different contexts, such as internet browsing [5][21] and digital libraries [11]. The context of learning objects lacks such important studies [9]. In our earlier works [17][18] we investigated the behavior of learning object indexers and searchers by analyzing the usage logs. We further applied a usability evaluation [19] to determine and improve the usability of indexation tools used by Ariadne users.

In this paper, we introduce a user study conducted to evaluate the usability of search tools used in LORs. More specifically, we want to determine the usability of a search tool used by Ariadne users to locate relevant objects. This evaluation will help us to determine to what extent search tools enable users to reach their goals effectively and efficiently [2][15]. In addition, it will help us to determine users' satisfaction with the overall use of the evaluated tools. The findings of this study are generalized to similar tools used in other LORs.

The paper is structured as follows: in section 2, background is provided. In section 3, we discuss the research objectives, methods and materials. In section 4, we provide the findings and recommendations of the study. A discussion is presented in section 5. Conclusions and future directions are provided in section 6.

2 Background

The basic mission of ARIADNE [1] is to enable better quality learning through the development of learning objects, tools and methodologies that enable a "share and reuse" approach for education and training. A search and indexation tool (SILO) was developed to facilitate storing new objects and searching for relevant objects in the Knowledge Pool System (KPS). This KPS is a Learning Object Repository (LOR), like EdNa [10], SMETE [24], and Merlot [16]. Ariadne users use the SILO tool (see figure 1) to introduce new learning objects or to search for relevant objects in the Ariadne KPS and in other collaborating repositories as well. The SILO indexation functionalities are not relevant for this paper; they are discussed in [19][20].

Figure 1: Screenshot of SILO Indexation client

Each repository aims at providing its users (teachers, students, content developers) with the learning objects they need to reuse in their different courses. This requires repositories to:

• Bear a vast collection of learning objects. As manual indexing (uploading the objects and adding descriptive metadata) of new objects is labor-intensive, searching other repositories using federated search (querying objects in a set of collaborating repositories from within the same search tool) may lead to gathering a larger set of appropriate learning objects for users; a minimal sketch of such a federated query is given at the end of this section. Nowadays, repositories such as Ariadne, Merlot, EdNa and SMETE allow their users to access learning objects in the different repositories. Furthermore, semi-automatic or automatic indexation of learning objects can significantly increase the number of learning objects.

• Provide tools and functions that allow finding appropriate learning objects. Different repositories use different functions (simple/advanced search and browse) and criteria to locate learning objects. Repositories may rely on different metadata sets (profiles) to facilitate searching for relevant learning objects; for example, Ariadne and Merlot use a LOM [13] application profile while EdNa uses a Dublin Core [8] profile. In this study, we want to evaluate the usability of the search tools used in those repositories.
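As an illustration of the federated search idea mentioned above, the following minimal sketch (Python) queries several repositories from one tool and merges the returned object descriptions. The repository names are real, but the query function is a stand-in: the actual ARIADNE, Merlot, EdNa and SMETE search protocols and endpoints are not shown here.

```python
# Minimal sketch (illustrative only): a federated search sends the same query
# to several repositories from within one search tool and merges the results.
from concurrent.futures import ThreadPoolExecutor

def query_repository(name: str, query: str) -> list:
    """Stand-in for one repository's search API; a real client would call that
    repository's search service over the network."""
    local_catalogue = {
        "ARIADNE": [{"title": "Process re-engineering slides", "repository": "ARIADNE"}],
        "MERLOT":  [{"title": "Business process tutorial", "repository": "MERLOT"}],
    }
    return [r for r in local_catalogue.get(name, []) if query.lower() in r["title"].lower()]

def federated_search(query: str, repositories: list) -> list:
    """Query all repositories in parallel and merge the returned result lists."""
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda name: query_repository(name, query), repositories))
    return [obj for results in result_lists for obj in results]

print(federated_search("process", ["ARIADNE", "MERLOT", "EdNa", "SMETE"]))
```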

3 Description of Research

In this study, we aim to evaluate and improve the usability of a search tool (SILO) used by Ariadne users to search for learning objects in the KPS. We want to empirically answer the following questions:

• How do users use the search tool?
• How effectively and efficiently does the search tool help users perform their search tasks?
• What are the factors that may increase the performance of finding learning objects?
• Are users satisfied with the overall use of the tool?

In order to collect primary data on the usability of SILO, and to double check and validate our findings, we conducted two iterative usability phases:

• First phase: we collected primary data from extended sessions with 16 participants to determine the initial usability of the tool.

• Second phase: here, we evaluated the tool after solving the usability problems and integrating the recommendations that emerged from the first phase. In this phase, we collected data from extended sessions with 10 new participants who had no prior experience with the tool.

The twenty-six participants volunteered to represent the intended user community and cover a range of disciplines including Arts, Computer Science, Electrical Engineering, Chemistry, Physics, Archeology, Architecture and Medicine.

We started each test session by introducing the participants to the process. We also explained to them how important their feedback is for us to improve the usability of the search tool. No instructions were given for SILO, because many users who search for learning objects are likely to be inexperienced with such tools.

During each test session, a participant was asked to use SILO to perform a number of pre-selected task scenarios (an example is provided in table 1) that are related to the participants' work context. Participants were asked to think aloud while carrying the tasks out. At the end of each session in both phases, participants were asked to fill in a feedback questionnaire on the overall use of SILO. An experimenter was present in the test room throughout the session to provide any assistance solicited and to observe the participants' behavior. In addition, screen recording was used to observe user interaction during the test sessions.

Table 1: Excerpt from the tasks assigned to participants in the usability test

Task 1   Search for learning material on the topic "Business Process".
Task 2   Try to locate learning materials in Dutch on the topic "Process Re-engineering".
Task 3   Find a questionnaire on the topic "Process Re-engineering".
Task 4   Find all learning objects published by Aebi Marqaux.
Task ..  ...

Screen recordings and paper notes were compiled and analyzed, and in both phases the findings were sent to the development teams for improvements.

In the next section, we discuss findings and recommendations of both usability phases.

4 Usability Evaluation

In section 4.1, we present the first usability phase conducted with sixteen participants to collect primary data on SILO usability. In section 4.2, we present the second usability phase, conducted to measure the usability of SILO after solving the usability problems identified in the first phase.

4.1 First Phase Findings and Recommendations

1) An efficient simple search function is important to elevate users' motivation and trust

The simple search function is provided as the default service in the user interface. It works by matching terms provided by users against the stored metadata of data elements. All participants started their sessions (first task) of searching for appropriate learning objects using the simple search function. Surprisingly, most participants were not able to find appropriate objects through the simple search, although they were asked to search for objects that are available in the KPS. We believe that participants were not able to find their objects for the following reasons:

• Matching the terms provided in the simple search box only against the stored metadata of the two elements "title" and "author" (the elements used by the simple search function) narrows the possibility of finding appropriate objects.

• Simple search uses an exact match approach, which is also not efficient in the context of finding objects; users rarely provide exact keywords in their searches.

The simple search function of the evaluated search tool, and of tools used in other repositories, can be made more efficient and smarter if the following issues are addressed:

• Simple search functions should search in all metadata elements that contain rich information about learning objects. The current search tools (in Ariadne and other repositories) search three to six metadata elements. That might be related to technical issues, like the load of string comparison. This can be overcome using the power of database indexing techniques [7], which allow many more metadata elements to be searched instantly. We may also search the elements that are mostly used by users when indexing or searching for material [18]. For example, all simple search functions should search the values provided for elements like "main concept", "main discipline", "document format" (diagram, slides, simulation or video) and "description", which are the elements mostly used by users in different contexts such as LORs [18] and digital libraries [11].

• Simple search should not only be based on an exact match between the values provided for metadata elements and the terms entered by users. Unfortunately, the metadata stored for each learning object and the terms provided in user queries often have many morphological variants. That leads, for example, to the two keywords "ontology" and "ontologies" being considered non-equivalent terms, while in the context of information retrieval both terms are equivalent. Using some form of natural language processing (NLP), like stemming algorithms [23], may increase the performance of the simple search function.

The techniques mentioned above can noticeably enhance the usefulness of the simple search service; a minimal code sketch illustrating both recommendations is given below.
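The following sketch (Python) is not the SILO implementation; it only illustrates the two recommendations above under assumed element names and records: the query is matched against several rich metadata elements rather than only "title" and "author", and a crude suffix-stripping stemmer (a stand-in for a real algorithm such as Porter stemming [23]) makes morphological variants like "ontology" and "ontologies" equivalent.

```python
# Minimal sketch (illustrative only): simple search over several metadata
# elements with stemmed, token-based matching instead of exact string match.
SEARCHED_ELEMENTS = ["title", "author", "main_concept", "main_discipline",
                     "document_format", "description"]  # assumed element names

def stem(word: str) -> str:
    """Very crude stemmer: maps both 'ontologies' and 'ontology' to 'ontolog'."""
    for suffix in ("ies", "ing", "ed", "es", "s", "y"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def tokens(text: str) -> set:
    return {stem(t) for t in text.lower().split()}

def simple_search(query: str, records: list) -> list:
    """Return records whose searched metadata elements share a stemmed token
    with the query."""
    query_tokens = tokens(query)
    hits = []
    for record in records:
        field_text = " ".join(str(record.get(e, "")) for e in SEARCHED_ELEMENTS)
        if query_tokens & tokens(field_text):
            hits.append(record)
    return hits

# Example: the query "ontologies" matches an object whose main concept is "ontology".
records = [{"title": "Week 3 slides", "main_concept": "ontology",
            "document_format": "slides"}]
print(simple_search("ontologies", records))
```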

2) The organization and structure of information should be adapted to users' needs

The advanced search function uses a set of metadata elements to enable searching for learning objects. Those elements are grouped into five panels (see figure 1); this approach of relevance grouping (semantic, technical, pedagogical, etc.) was proposed as usable for end users [20]. To perform a successful search query, users need to fill in or select values for the elements that form their search.

Most participants (88%) found the current advanced search overwhelming, and they preferred to have the advanced search elements on one panel.

One participant replied, “Why not to have the ‘main concept’ element in the first panel instead of the second, it should be some where else”.

Moreover, two thirds of the participants reported that the advanced search interface has too many metadata elements; they wanted only a few fields to be presented in the advanced search. For example, they considered data elements like file name, operating system and indexation identifier too technical and felt they should be hidden from users. That is a trade-off: another set of users (technical users) may prefer to be presented with the technical elements.

In addition, about two thirds of the participants found that the current advanced search interface makes some options hard to reach. For example, one of the tasks was to locate a "UML diagram" that explains the concept "inheritance". Most participants were not able to finish that task successfully because they could not select the value "diagram" in the "document format" element. Selecting that value means that the searcher wants to locate only diagrams, not a video or a narrative text.

Based on the observations and the feedback questionnaire (see table 2), we found that the organization of the current advanced search does not match users' needs. The number of metadata elements in the advanced search form should be decreased and the elements should be re-organized in one panel. In addition, the form should be adapted to the users' needs and not to the metadata standard used (LOM). This can be achieved by presenting users with 2 to 4 elements that carry rich metadata and are mostly used for searching, as sketched below. Those most used elements can differ from one community of users to another.
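A minimal sketch of how such a reduced, community-specific form could be chosen: the fields shown by default are the ones most often filled in by that community's searchers, in the spirit of the usage analyses in [18]. The element names and log entries are hypothetical.

```python
# Minimal sketch (hypothetical usage data): pick the few metadata fields to
# show in the advanced search form by counting which elements a community's
# searchers actually fill in.
from collections import Counter

def most_used_fields(search_logs: list, k: int = 4) -> list:
    """Return the k metadata elements most often filled in by this community."""
    counts = Counter(field for query in search_logs for field in query)
    return [field for field, _ in counts.most_common(k)]

# Each log entry lists the fields one user filled in for one advanced search.
logs = [["title", "main_concept"], ["main_concept", "author"],
        ["main_concept", "document_format"], ["title", "author"]]
print(most_used_fields(logs))  # e.g. ['main_concept', 'title', 'author', 'document_format']
```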

3) The terminology used should be understandable by users

More than 60% of the participants found the terminology used for metadata elements and vocabularies in the advanced search function confusing. That is due to the fact that the labels of elements as well as vocabularies in the advanced search follow the labels of the metadata standard and not the users' context and understanding. For example, the labels of the vocabulary values "active" and "expositive" of the element "document type" were incomprehensible for most users (the term "active" denotes learning objects that include interaction between the end user and the object, like questionnaires or problem statements, while the term "expositive" denotes objects like an audio file or a diagram, which involve no interaction with the user). Also, the labels of the elements "semantic density" (the amount of information conveyed by the learning object) and "header author" (the author of the metadata, not of the learning object itself) were not understood by most participants.

The questionnaire feedback (see table 2) and the observation of participants show that the use of terminology in the search interface should be improved. This can be achieved by presenting those elements and vocabularies in terminology that is understandable by the users, instead of the labels provided by the standard. Giving those elements and vocabularies labels that are understandable by users and different from how they are represented in the metadata standard does not mean that the LOR metadata no longer rely on the standard. We recommend that the labels of the data elements and their associated vocabularies be adapted to the more familiar terminology used by users and then mapped to the appropriate elements of the standard at the technical level (a sketch of such a mapping is given in section 5).

4) Help and feedback should be improved to improve users' performance

About half of the participants found that SILO should provide users with more purposeful help and feedback whenever needed. Participants asked for help hints explaining the meaning of some terminology. They found that more feedback messages should be provided to users when they perform a wrong action, or when the object is not found or is restricted (e.g., copyright, author permission, etc.).

Based on the observation of participants and the feedback questionnaires (see table 2), we believe that users should be provided with help messages and hints that guide them to improve their search performance. That can be achieved by providing users with simple hints explaining the meaning of data elements or vocabularies wherever needed. In addition, guidance messages that may help users find their objects should be provided. For example, when a learning object is not free to use, we may present the user with the following message: "in order to use this learning object you should contact the author of the object", together with the contact information of the author in question. That kind of guidance message can greatly enhance the overall user satisfaction.

5) The ability to refine search terms enhances search performance

The majority of participants disliked the fact that SILO does not provide the facility to reformulate previous queries. The search interface should enable users to reformulate their previous queries, especially in cases where users want to change or remove one or two letters before submitting the new query, or to narrow down the search within the given results (like in Google); a minimal sketch of the latter follows below.
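The following sketch (Python) is only an illustration of the "narrow down within results" idea, not a SILO feature: the previously retrieved objects are filtered by the additional terms the user types, so the query does not have to be rebuilt from scratch. Records and field names are hypothetical.

```python
# Minimal sketch (illustrative only): refining a search by narrowing the
# previously returned results with additional terms.
def narrow_within_results(previous_results: list, extra_terms: str) -> list:
    """Keep only the previously retrieved objects whose metadata contains all
    of the additional terms provided by the user."""
    terms = extra_terms.lower().split()
    return [r for r in previous_results
            if all(t in " ".join(str(v) for v in r.values()).lower() for t in terms)]

previous = [{"title": "Process re-engineering", "document_format": "slides"},
            {"title": "Process re-engineering", "document_format": "questionnaire"}]
print(narrow_within_results(previous, "questionnaire"))
```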

Overall Satisfaction

Table 2 presents the participants' responses to questions concerning the overall use of the SILO search tool. A popular seven-point attitude scale (ranging from 1 = poor to 7 = good) was used to measure the participants' responses on the overall use of SILO.

Table 2: User satisfaction on the overall evaluation

Criterion                      Mean   St. Dev.
1. Ease of use                 3.6    1.3
2. Information organization    3.7    1.5
3. Use of terminology          3.3    1.4
4. Quality of feedback         3.8    1.9
5. Navigation                  5.0    1.5
6. Search and download         4.8    1.9
7. Results list                5.1    1.4
Overall mean                   4.2

The mean for ease of use was less than 4, which means that the participants found SILO rather difficult to use. Information organization, use of terminology and quality of feedback were also perceived as low (means 3.7, 3.3 and 3.8, respectively). We believe that this is related to the fact that the organization and naming of metadata elements and vocabularies are adapted to the metadata standard used (LOM) rather than to user needs. Navigation between the panels and the readability of the results list were seen as acceptable (means 5.0 and 5.1).

As can be seen from the table above, the level for the overall use of SILO was perceived as moderate (mean 4.2). We think that SILO usability should be improved, in order to increase the performance of finding appropriate learning objects.

In the next section, we present the findings of the second phase of the usability test, aimed at evaluating SILO after integrating the usability recommendations of the first phase.

4.2 Second Phase Findings

The aim of this phase is to determine the usability of SILO after removing the complications and usability problems that decreased participants' performance in the first phase, and to discover new usability problems that did not surface in the first phase.

Ten new participants (inexperienced with the tool) were asked to test the tool after the usability problems that appeared in the first phase had been fixed. The aim here is to measure SILO usability after solving some of the identified problems. The following improvements were integrated in the new SILO:

• The simple search now queries more metadata elements (author, title, and concept) to increase the number of retrieved learning objects. The main concept element mostly contains the main keywords of the learning object.

• The advanced search function presents users with only four fields of metadata elements (Title, Author, Usage rights and Concept). Users can also extend their queries with more fields if needed, by clicking on a button captioned with “more” (see figure 2).

• Users can refine the search terms provided in the advanced search function.

• A federated search function was added to allow users to search for appropriate objects in more LORs such as Merlot, SMETE and EdNa.

• More purposeful help notes and feedback messages are provided to users whenever needed.

Figure 2: Screenshot of the new SILO

Table 3 presents the participants’ response to questionnaire questions on the overall use of SILO after solving the usability problems obtained from the first evaluation phase.

Table 3: User satisfaction on SILO in the second phase

Criterion                      Mean (second phase)   Mean (first phase)
1. Ease of use                 5.5                   3.6
2. Information organization    5.0                   3.7
3. Use of terminology          5.0                   3.3
4. Quality of feedback         5.1                   3.8
5. Navigation                  6.3                   5.0
6. Search and download         6.3                   4.8
7. Results list                6.1                   5.1
Overall mean                   5.6                   4.2

The mean for ease of use increased noticeably to a good level (from 3.6 to 5.5), which means that participants found SILO considerably easier to use after the major usability problems identified in the first phase were solved. The ratings for information organization, use of terminology and quality of feedback also increased noticeably to a good level. Navigation, readability of the results list and download of objects were perceived as high. In general, the overall use of SILO was perceived as good.

We drew a comparison between the overall use of SILO before and after addressing the usability problems. Five participants (group 2) who participated in phase two were asked to use the old interface of SILO. Moreover, we asked another five participants (group 1) who had participated in the first phase to evaluate the SILO interface of the second phase (SILO 2). This was done to revalidate the recommendations and usability problems obtained from the first evaluation phase and to draw comparisons between the two interfaces. Based on the participants' feedback, we found that SILO 2 was much easier to use and much less overwhelming than SILO for both groups (see figure 3).

Figure 3: Comparison between responses of participants who evaluated SILO 2 then SILO (group 2), and participants who evaluated SILO then SILO 2 (group 1)

As shown in figure 3, in both groups 1 and 2 the overall usability of SILO 2 is rated higher than that of SILO. In addition, the level of user satisfaction with all the studied factors (ease of use, use of terminology, information organization, etc.) is higher for SILO 2 in both groups.

5 Discussion

As discussed in the previous section, finding appropriate learning objects is still not an easy task. Poor usability of the search interface may noticeably decrease the performance of users searching for relevant material. The terminology and the structure of information in the old SILO were adapted to the metadata standard rather than to user needs. The same practice of metadata use can be noticed in the search interfaces of other existing repositories such as Merlot and SMETE (see figures 4 and 5). Duval and Hodgins [9] called that practice a metadata myth that should be killed. The labels used for metadata elements and the way they are structured in the standard are intended to provide guidance to tool developers; in the user interface they need to be replaced by words meaningful to the end users and structured and presented according to the needs of the local community.
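A minimal sketch of this separation between user-facing labels and the standard: community-specific labels are mapped onto the underlying metadata elements at the technical level, so the interface speaks the users' language while the stored metadata still conforms to the standard. The labels and element paths below are assumptions for illustration, not the actual Ariadne configuration.

```python
# Minimal sketch (illustrative only): user-friendly, community-specific labels
# mapped to standard LOM element paths at the technical level.
UI_LABEL_TO_LOM = {
    "Topic":            "general.keyword",                   # instead of "main concept"
    "Subject area":     "classification.taxonPath",          # instead of "main discipline"
    "Type of material": "educational.learningResourceType",  # hides "active"/"expositive"
    "Metadata author":  "metaMetadata.contribute.entity",    # instead of "header author"
}

def build_standard_query(ui_fields: dict) -> dict:
    """Translate a form filled in with user-friendly labels into a query over
    standard LOM element paths."""
    return {UI_LABEL_TO_LOM[label]: value
            for label, value in ui_fields.items()
            if label in UI_LABEL_TO_LOM}

# Example: the user never sees the LOM element names.
print(build_standard_query({"Topic": "inheritance", "Type of material": "diagram"}))
```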

The main goal of a search tool is to facilitate the finding of appropriate learning objects. We believe that meeting this goal is a great challenge facing the different LORs. As illustrated in the previous sections, form-based search tools are not easy to use, and most of their components (metadata elements and the associated vocabularies) are not familiar to, and may not be used by, most users. Users tend to use few terms to look for appropriate materials. In order to provide the user with more usable and interactive methods for finding appropriate objects, the following issues should be addressed:

• Simple search functions should be smart enough to compare the keywords provided in the text box with as many metadata elements as possible, in particular elements that are often used by indexers when introducing learning objects to such repositories. The existing simple search functions of all the mentioned LORs do not match the terms provided by the user in the search box against the values provided for the "document format" element. Therefore, when users provide the keywords "inheritance diagram", they will not be able to locate a diagram object that was indexed with the title "inheritance".

• Advanced search interfaces should not contain too many metadata elements. The Merlot advanced search presents users with 16 elements and SMETE presents 8 elements (see figures 4 and 5). Users should only be presented with the two to four most used elements. Advanced search interfaces that require the user to navigate between different menus and to provide values for elements placed in long lists decrease users' search performance and do not guarantee finding the appropriate objects. Metadata and vocabulary labels should be understandable by users, and help notes should be provided for unobvious metadata elements. As shown in figure 5, the SMETE advanced search provides help notes for all data elements; we recommend providing help notes for unobvious elements only.

• Ranking mechanism: results ranking has a major impact on users' satisfaction with search engines and on their success in retrieving relevant documents [6]. The major existing repositories sort the retrieved results by the author, title or metadata creation date of the objects. They do not rank retrieved objects according to their relevance to users' search queries. The results list should place the objects that best match the search query in the top ten of the list; a minimal sketch of such relevance ranking is given after this list. Such ranking mechanisms for LOR search tools can enhance users' performance in retrieving relevant objects.

• More novel access techniques such as information visualization [ 14] and recommender systems should be considered to improve finding appropriate objects.
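As promised in the ranking item above, here is a minimal sketch (Python) of relevance ranking: retrieved objects are ordered by how many query terms occur in their metadata, rather than by author, title or creation date. The scoring is deliberately naive (real systems would use weighted schemes such as TF-IDF), and the records are illustrative.

```python
# Minimal sketch (illustrative only): ordering retrieved learning objects by a
# simple relevance score instead of by author, title or metadata creation date.
def relevance(query: str, record: dict) -> int:
    """Number of distinct query terms found anywhere in the record's metadata."""
    text = " ".join(str(v) for v in record.values()).lower()
    return sum(1 for term in set(query.lower().split()) if term in text)

def ranked_results(query: str, records: list) -> list:
    """Return records ordered so that the best matches appear at the top."""
    return sorted(records, key=lambda r: relevance(query, r), reverse=True)

records = [
    {"title": "inheritance", "document_format": "video"},
    {"title": "inheritance", "document_format": "diagram", "description": "UML class diagram"},
]
print(ranked_results("inheritance diagram", records))
```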

Figure 4: Screenshot of the search interface of Merlot


Figure 5: Screenshot of the search interface of SMETE

6 Conclusions

In 1986, Borgman discovered that Online Public Access Catalogs (OPACs) are difficult to use because their design does not match searching behavior [3]. In 1996, in a follow-up study, Borgman concluded that OPACs were still difficult to use [4]. In this paper, we conclude that, at least for Learning Object Repositories, the situation has not improved substantially. (We would argue that the same observation holds for many digital libraries, but that was not the focus of the study reported here.)

More specifically, we conclude that the Ariadne search tool is hard to use, because its interface reflects the metadata standard rather than the characteristics, aims and requirements of the end user. In the paper, we make specific recommendations to improve the usability of the Ariadne search tool and we generalize our recommendations for other learning object repositories.

More generally, we once more want to emphasize (as others have done before us) that the outcomes of such user studies are vital to improve the design of search tools that can better serve the needs of end users.


References

1. ARIADNE. http://www.araidne-eu.org
2. Blandford, A., Suzette, K., Connell, I., Edwards, H., Analytical usability evaluation for digital libraries: a case study, ACM/IEEE Joint Conference on Digital Libraries, 2004, pp. 27-36.
3. Borgman, C. L., Why are online catalogs hard to use? Lessons learned from information retrieval studies. Journal of the American Society for Information Science, 37(6), 387-400, 1986.
4. Borgman, C. L., Why are online catalogs still hard to use? Journal of the American Society for Information Science, 47(7), 493-503, 1996.
5. Cockburn, A., McKenzie, B., What do web users do? An empirical analysis of web use, Int'l Journal of Human-Computer Studies, 54(6), 903-922, 2001.
6. Courtois, M. P., Berry, M. W., Results-ranking in Web search engines. Online, 23(3), 39-40, 1999.
7. dtSearch, How to index databases with the dtSearch Engine. http://support.dtsearch.com/faq/dts0111.htm
8. Dublin Core, Dublin Core Metadata Element Set v1.1. http://www.dublincore.org
9. Duval, E., Hodgins, W., A LOM research agenda. International Conference on World Wide Web, 2004.
10. EdNa. http://www.edna.edu.au/
11. France, R. K., Nowell, T. L., Fox, A. E., Saad, A. R., Zhao, J., Use and usability in a digital library search system, CoRR cs.DL/9902013, 1999.
12. Jones, S., Cunningham, S. J., McNab, R., An analysis of usage of a digital library, ECDL 1998, pp. 261-277.
13. IEEE Standard for Learning Object Metadata. http://ltsc.ieee.org/doc/wg12/
14. Klerkx, J., Duval, E., Meire, M., Using information visualization for accessing learning object repositories, Information Visualization 2004 (IV04), pp. 465-470.
15. Marchionini, G., Plaisant, C., Komlodi, A., The people in digital libraries: Multifaceted approaches to assessing needs and impact. In: A. Bishop, B. Buttenfield, & N. VanHouse (Eds.), Digital Library Use: Social Practice in Design and Evaluation. MIT Press, November 2003, pp. 119-160.
16. MERLOT. http://www.merlot.org/
17. Najjar, J., Ternier, S., Duval, E., The actual use of metadata in ARIADNE: An empirical analysis, ARIADNE 3rd Conference, 2003, pp. 1-6.
18. Najjar, J., Ternier, S., Duval, E., User behavior in learning objects repositories: An empirical analysis. EdMedia 2004, pp. 4373-4378.
19. Najjar, J., Klerkx, J., Ternier, S., Verbert, K., Meire, M., Duval, E., Usability evaluation of learning object indexation: the ARIADNE experience, European Conference on e-Learning, 2004, pp. 281-290.
20. Neven, F., Duval, E., Ternier, S., Cardinaels, K., Vandepitte, P., An open and flexible indexation and query tool for ARIADNE, EdMedia 2003, pp. 107-114.
21. Nielsen, J., When search engines become answer engines, useit.com, 2004.
22. O'Neill, C., Paice, C. D., The Lancaster stemming algorithm. http://www.comp.lancs.ac.uk/computing/research/stemming/
23. Porter, M., The Porter stemming algorithm. http://www.hackdiary.com/
24. SMETE. http://www.smete.org/