EVALUATION OF THE DIGITAL DOORWAY INITIATIVE

M. A. Marais (a), R. Smith (a), M. Rampa (a), U. du Buisson (b), A. Choles (c) and S. Blignaut (d)

(a) Meraka Institute, CSIR, Pretoria, South Africa
(b) Machule Business Consultants, Cape Town, South Africa
(c) The Narrative Lab, Benoni, South Africa
(d) More Beyond (Pty) Ltd, Pretoria, South Africa

Abstract

The Digital Doorway (DD) project is funded by the Department of Science and Technology (DST) and the Department of Rural Development and Land Reform, and is implemented by the CSIR Meraka Institute. The Digital Doorway's vision is to make a fundamental difference to computer literacy and associated skills in South Africa. Underpinning the DD project is the concept of people's inherent cognitive ability to acquire functional computer skills through their own intuition and exploration, without formal training and with minimal external intervention. The computers are installed so that they are easily accessible to potential users, in an environment conducive to experimentation. To date more than 230 Digital Doorways have been installed nationally and 36 internationally. The DST commissioned an evaluation to determine whether the DD project has achieved this aim of supporting computer literacy. The evaluation questions are discussed as they link to the overall evaluation design, evaluation methods and data sources. A particularly interesting component of the evaluation design is how a novel combination of Outcome Mapping (OM) and Narrative Enquiry (NE) was used to evaluate the systemic influence of DDs in contributing to basic computer literacy through the "minimally invasive education" strategy. NE is typically qualitative in nature, but the evaluation utilised SenseMaker®, a software tool developed by Cognitive Edge, designed to gather large amounts of narrative material and enable sense-making of it. It strives to bridge the gap between qualitative and quantitative research and assessment methods by combining the richness of narrative (qualitative) with the scalability of numbers (quantitative). The benefit of the combined evaluation approach was that OM provided the backbone of the evaluation and the guiding questions for the analysis used in the NE. Sense-making and reflection processes were used to produce a synthesis from the entire dataset.

1. Introduction

The Digital Doorway (DD) is a joint initiative between the DST and the Meraka Institute of the CSIR, with a vision of making a fundamental difference to computer literacy and associated skills in Africa. Underpinning the project is the idea of people's inherent cognitive ability to teach themselves computer skills with minimal external intervention (Mitra, Dangwal, Chatterjee & Jha, 2005; Mitra, 2000, 2003). The DD project has been operational for ten years, and at the time of the evaluation (April 2011 to March 2012) 210 units had been deployed throughout South Africa, mostly in deep rural settlements (Smith, Cambridge & Gush, 2006; Cambridge, Smith & Gush, 2008; Cambridge, 2008; Herselman et al., 2010; Stillman et al., 2010). It is important to determine how this technology has affected the lives of the communities where it has been deployed, and what lessons were learnt by the project team, in order to provide a clear picture of the value of this type of intervention. The evaluation project was undertaken to determine whether the DD has achieved this aim of computer literacy, if so, how it supports it, and whether it has evolved since its inception (Marais et al., 2012).
The goal was to conduct an outcome evaluation of the Digital Doorway project through the development, testing and implementation of an evaluation methodology that focuses not only on the direct beneficiaries, but also on other key role players, such as teachers, and especially on the learning that has taken place among these key role players. A combination of approaches was used: Outcome Mapping (OM), Narrative Enquiry (NE) and quantitative analysis. An evaluation framework that combines OM and NE was developed, and the evaluation was based on the evidence produced by the two methodologies, with OM as the backbone of the evaluation methodology. The quantitative analysis is not discussed in this paper.

2. Design of the evaluation framework

Overview

In order to address the social aspects of the introduction of ICTs to communities, the evaluation was based upon the principles of Outcome Mapping (OM) and narrative-based methods. Narrative evaluation techniques were employed to capture stories of impact as told by the beneficiaries themselves. SenseMaker® was used in order to strike a balance between the richness and depth provided by qualitative assessment and the scale of quantitative methods.

Narrative

Hinchman and Hinchman (1997: xvi) define narratives as "discourses with a clear sequential order that connect events in a meaningful way for a definite audience, and thus offer insights about the world and/or people's experiences of it." Telling stories is a fundamental human activity: ever since we have been able to communicate, narrative has been a means by which we learn, represent ourselves to others and make sense of our lives. Narrative enquiry methods focus on collecting and analysing these stories in order to understand human experience (Case & Light, 2011). Polkinghorne (1995) outlines two modes in which narratives can be analysed: "analysis of narrative", which proceeds in the scientific mode and attempts to identify common themes across a series of narratives, and "narrative analysis", which analyses each narrative on its own terms. The more usual form he describes as "paradigmatic" analysis, where the researcher attempts to identify common themes across the various narratives that have been collected as data. In this project, the research analysts combined the approaches: in the first instance, attention was given to the layers of meaning attached to each story as defined by the storytellers through the accompanying narrative survey completed at story collection; secondly, common themes were identified across the series of narratives, while each piece of narrative was also evaluated in and of itself.

SenseMaker®, a narrative collection and analysis tool, was used as the key system for gathering the narrative material required for evaluating outcomes. SenseMaker®, developed by Cognitive Edge (www.cognitive-edge.com), emerged out of a project with the Singapore Ministry of Defence and was designed to gather large amounts of narrative material to inform decision-making. The philosophy underpinning the software is that the stories people tell are the filter through which they make decisions and subsequently make sense of the world they live in. The aim of SenseMaker® is to allow researchers to quantify qualitative material, thus combining the scale and breadth of quantitative research with the depth and richness of qualitative methods. SenseMaker® achieves this first by gathering narrative material (qualitative) and, secondly, by asking those contributing the narrative to assign meta-data to (signify) their stories using a set of questions that form an interpretative framework for assessment. This quantitative meta-data, assigned by the storyteller to each story, then allows the stories to be compared, assessed and interrogated for patterns.

Outcome Mapping

The choice of Outcome Mapping is motivated by several factors. OM, together with case studies and ex post cross-sectional surveys, is one of the major methodologies suited to ex post outcome assessments that are rapid and economical (Rachel, 2006: 9). The philosophy of the DD as a vehicle for minimally invasive education leads to limited engagement by the project team with users and communities, and a dependence on other parties to achieve outcomes. The very nature of the project is therefore aligned with the departure point of OM: that external agents only facilitate change, and that control of change is in the hands of the key role players that can be influenced directly by the project (called boundary partners, e.g. DD users, DD champions and the CSIR team) (Earl, Carden & Smutylo, 2001). OM focuses on outcomes as behavioural change and defines outcomes as "changes in the behaviour, relationships, activities, or actions of the people, groups, and organizations with whom a program works directly" (ibid: 1). A series of desirable behavioural changes, called progress markers, is defined for each boundary partner. Other strengths of OM include the active engagement of the team involved in executing the project (rather than evaluation being done to them), the focus on learning as the primary outcome of evaluation, and the focus on the use of the findings. While OM is used primarily during the design stage of a programme, it can also be used during programme execution or, as in this case, as an "end-of-program assessment tool when the purpose of the evaluation is to study the program as a whole" (ibid: 11).

Combining the two methods

The purpose of combining Outcome Mapping with Narrative Enquiry and monitoring was to facilitate the collection and presentation of qualitative and quantitative data in a way that is particularly suited to making sense of complex issues. The use of SenseMaker® allows the analysis of considerable numbers of narratives, providing a broad and rich base of qualitative responses from DD users that supplements the use of Outcome Mapping. OM focuses on behavioural change, and the idea was that this combination of approaches would enable us to develop an understanding of the attitudes and other factors that drive behaviour when people interact with the Digital Doorways.
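To make the combination concrete, the sketch below shows one way the two evidence streams could be represented side by side: a story with its self-assigned signifier meta-data (NE), and an interview observation against a progress marker (OM). All field names and example values are illustrative assumptions, not the actual SenseMaker® or OM schemas used in the evaluation.

```python
# A minimal sketch of the two evidence streams, under assumed field names.
from dataclasses import dataclass, field

@dataclass
class SignifiedStory:
    """A narrative fragment plus the meta-data its teller assigned (NE)."""
    story_text: str
    signifiers: dict = field(default_factory=dict)  # e.g. {"usefulness": 0.9}

@dataclass
class ProgressMarkerObservation:
    """Interview evidence of behavioural change in a boundary partner (OM)."""
    boundary_partner: str   # e.g. "DD user", "DD champion"
    marker: str             # the expected behavioural change
    achieved: bool
    evidence: str           # interview note supporting the judgement

# Both record types can later be grouped under shared key focus points,
# which is where the two methodologies meet during sense making.
story = SignifiedStory("I taught my friend to use the DD.", {"usefulness": 0.9})
obs = ProgressMarkerObservation("DD user", "shares skills with peers", True,
                                "User reports teaching others at the school DD.")
```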

3. Developing a methodology

The major work on the methodology was done during four workshops: one held in June 2011 during the planning phase and the other three in March 2012 during the data analysis phase. At the start of the project each team did preliminary planning based upon previous experience, and during the June 2011 workshop these plans were used as inputs for the high-level planning in which the OM and Narrative activities were mapped to these four phases:

Design

Data collection

Sense making

Reflection

The team concluded that the major interaction between the methodologies would occur during the Sense making phase. The March 2012 workshops, held in the middle of the Sense making phase, developed the methodology further by fleshing out how that phase should be concluded. Figure 1 below outlines the flow of the methodology.

Figure 1: Evaluation flow

Design phase

The overall evaluation design is composed of the OM and Narrative evaluation designs. OM provided the strategic framework for the evaluation by eliciting and clarifying key aspects of the project via what is called an intentional design. The intentional design covers the strategic framework for the monitoring and evaluation of the DD, dealing with the vision, mission, boundary partners, objectives (framed as outcome challenges), progress markers (behavioural change), strategies and organisational practices. Progress markers are a set of graduated indicators of expected behavioural change that clarify the ladder of progression of boundary partners over a period of time, and they are used to gather evidence on the achievement of outcomes. Strategies outline the project's activities in support of each outcome, aligned with the progress markers and outcome challenges of boundary partners. Particular attention was paid to populating the OM framework with content from the different sources in the various categories, to be able to explore the potential links between planning, objectives, activities and outcomes, and to place the use of OM in the context of the evaluation chain.

The Narrative Evaluation design, i.e. the design of the signification framework, was informed and guided by the mission and vision statements developed through OM and by the work done in the joint planning workshop. The survey instrument (the signification framework) was designed through a participative process involving the narrative research team and Meraka, over a number of workshops. The factors that influence the DD, as well as factors around the DD, formed the foundation for the development of the survey instrument's narrative elicitation questions and signifiers. These factors, also called modulators, were developed keeping in mind what this research project aimed to answer, as well as what the DST would be interested in knowing.
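As an illustration of the structure just described, the following is a minimal sketch of how the elements of an intentional design could be held together in code. The vision, mission, boundary-partner and progress-marker content shown is hypothetical and merely echoes the paper's examples; it is not the project's actual intentional design.

```python
# A minimal sketch of the OM intentional-design elements, with hypothetical content.
from dataclasses import dataclass, field

@dataclass
class BoundaryPartner:
    name: str
    outcome_challenge: str  # desired behavioural change, framed as a challenge
    progress_markers: list = field(default_factory=list)  # graduated indicators
    strategies: list = field(default_factory=list)        # supporting activities

@dataclass
class IntentionalDesign:
    vision: str
    mission: str
    boundary_partners: list = field(default_factory=list)

design = IntentionalDesign(
    vision="A fundamental difference to computer literacy in South Africa",
    mission="Deploy and support Digital Doorways for unassisted learning",
    boundary_partners=[
        BoundaryPartner(
            name="DD user",
            outcome_challenge="Users become confident, capable computer users",
            progress_markers=["tries the DD out of curiosity",
                              "uses the DD regularly",
                              "teaches other users"],
            strategies=["install DDs in accessible public sites"],
        )
    ],
)
```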

The survey instrument (also termed the signifier instrument) was piloted and refined. It consisted of three sections:

Sticky questions - Sticky questions are called such because they 'stick' to the respondent and only have to be answered once, irrespective of how many stories are shared. These questions gather basic demographic information about the respondent, e.g. gender, age group, home language and schooling. Respondents were also asked sticky questions about the DD, e.g. who they believe owns the DD and who causes the most problems at the DD.

Elicitation questions - These questions are designed to elicit stories or experiences from respondents. They are open-ended and phrased so that respondents are invited to share stories that are either positive or negative, for example:

As we're talking about the DD, what is the most important story you can share about the DD, which has either happened to you or someone you know, or that you've heard about?

In this project, five elicitation questions were developed. The first question was compulsory, whilst the remaining four were optional.

Signifiers - Signifiers are questions about the story itself. They are intended to provide an additional layer of data for analysis over and above the narrative already shared. Signifiers are completed for each story shared and are used to search for patterns across stories. Three formats of questions were used: multiple choice questions, polarity questions and triad questions. Polarity questions use slider scales: a line with two opposing answers on either side, on which respondents may place their response anywhere between the two options, depending on which statement they agree with more. A triad is used when there are three possible answers to the question and is designed to assess the interconnectedness between three variables. It is in the shape of a triangle, and respondents can choose their response anywhere within the triangle, depending on which option they agree with most.
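The sketch below illustrates one plausible encoding of polarity and triad responses for analysis: a slider as a position between two poles, and a triad response as three non-negative weights that sum to one (barycentric coordinates within the triangle). SenseMaker®'s internal representation is not documented here; these encodings are assumptions for illustration only.

```python
# Assumed encodings for polarity (slider) and triad signifier responses.

def polarity_response(position: float) -> float:
    """A slider between two opposing statements, encoded as 0.0 (fully the
    left-hand statement) to 1.0 (fully the right-hand statement)."""
    if not 0.0 <= position <= 1.0:
        raise ValueError("slider position must lie between the two poles")
    return position

def triad_response(a: float, b: float, c: float) -> tuple:
    """A point inside a triangle, encoded as three non-negative weights
    (one per corner label) normalised to sum to 1."""
    total = a + b + c
    if total <= 0 or min(a, b, c) < 0:
        raise ValueError("weights must be non-negative and not all zero")
    return (a / total, b / total, c / total)

# e.g. a story signified as mostly "learning", some "fun", little "work":
print(triad_response(6.0, 3.0, 1.0))  # (0.6, 0.3, 0.1)
```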

Data collection phase

The selection of the DD sites was a joint effort. The 14 sites selected via purposeful sampling were first visited by the OM team, and OM interviews were done with DD users and DD champions (mainly teachers). At these sites 202 interviews were done with users at 23 entities such as schools, libraries and multi-purpose community centres. On the basis of the OM team's recommendations, seven sites for Narrative interviews were selected, and 1327 stories were recorded, translated and transcribed. Members of the OM team also did Narrative interviews.

Sensemaking phase

The Sensemaking phase is divided into two sub-phases: the preliminary analysis phase and the collaborative analysis phase. During the preliminary analysis phase, both the OM and Narrative teams analysed their data to the point where the first results were emerging. In the OM case, the interview data on the achievement of progress markers, the use of strategies, lessons learnt and unexpected results was used to extract key issues. These were grouped into categories and super categories. An example of the categories of key issues that emerged is "Names used to describe the DD", which included "fun tool", "a resource" and "alternative Internet café". An example of a super category is "Perceptions of DD", which included "Names used to describe the DD", "DD is unique" and "First impressions created by the DD". From the preliminary analysis of the metadata (sticky questions, demographic information and signifiers) and the stories, the Narrative team identified the following themes:

There are multiple reasons for using the DD, with no predominant single use

Use for educational purposes and games seems dominant

Consistently high perception of usefulness/helpfulness across all age groups

Usage groups:

- 26% never used it or don't use it anymore
- 36% use it sometimes
- 35% use it often (seriously engaged)

Bullies cause the most problems at DDs

Most respondents do feel that their quality of life has improved

Most respondents find access to the DD easy
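The following sketch shows how theme-level percentages, such as the usage-group split above, could be derived from signifier meta-data. The field name "usage", its categories and the sample records are hypothetical.

```python
# Deriving theme-level percentages from (hypothetical) signifier meta-data.
from collections import Counter

responses = [
    {"usage": "never"}, {"usage": "sometimes"}, {"usage": "often"},
    {"usage": "often"}, {"usage": "sometimes"},
]

counts = Counter(r["usage"] for r in responses)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {100 * n / total:.0f}%")
```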

In the collaborative analysis phase these two sets of inputs were presented at the first of three Sensemaking Workshops. The OM results were presented first, and a dialogue ensued during which the so-called "Key Focus Points" that emerged from the key issues were identified and listed. The Narrative team pointed out the focus points that were similar to those that had emerged from their preliminary analysis and suggested new ones that they had identified. This set of key focus points was used by both teams in the further steps of the analysis process. Testing by the Narrative team of a multitude of questions, based on the data gathered from the interviews and the focus points, narrowed down the list of questions to be analysed further. During the workshops the Narrative team interpreted the key focus points into a set of "Guiding Questions" for further narrative analysis, for example:

Have the users become confident and capable users?

Do the DD champions act as champions of computer literacy to change lives?

In order to relate the NE analysis to the OM design as the backbone of the evaluation, the findings from the NE analysis were related to the relevant Key Focus Points and Guiding Questions, which were mapped to the OM outcome challenges. An example of this mapping is provided below:

Table 1: Mapping Key Focus Points to an OM outcome challenge

Key Focus Point (from OM and Narrative): Perceptions of DD. NE Finding 1: Consistent perception of helpfulness of the DD across all age groups. NE Finding 2: Sesotho speakers in primary school in the Free State find the DD relatively unhelpful; they do have exposure to cell phones, radio and TV.

Guiding Questions (derived from Key Focus Points): Have the users become confident and capable users? What are the dominant perceptions of the DD?

OM outcome challenge: User Outcome challenge

The key focus points were analysed further by the OM team to provide a systemic view of the interactions between the DD, its users and the social systems, such as the school and the community, in which these interactions are embedded. This was an attempt to provide another angle on the data, in addition to the traditional OM analysis view. The output of the Narrative analysis was mapped to these points, and the combination of OM and NE results per point was examined.
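To show how such a mapping could be kept navigable during analysis, the sketch below stores the content of Table 1 as a simple data structure that links each key focus point to its guiding questions, NE findings and OM outcome challenge. The structure is an illustrative assumption; only the abridged content comes from the table.

```python
# Table 1 as a (hypothetical) lookup structure linking NE findings to OM.
key_focus_points = {
    "Perceptions of DD": {
        "guiding_questions": [
            "Have the users become confident and capable users?",
            "What are the dominant perceptions of the DD?",
        ],
        "om_outcome_challenge": "User Outcome challenge",
        "ne_findings": [
            "Consistent perception of helpfulness across all age groups",
            "Sesotho-speaking primary school users in the Free State "
            "find the DD relatively unhelpful",
        ],
    },
}

# Any NE finding can now be traced back to its OM outcome challenge:
for point, detail in key_focus_points.items():
    print(point, "->", detail["om_outcome_challenge"])
```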

The final list of Key Focus Points reflects a systems view, showing the circles of influence (indicated in italics) that are related to the DD:

DD users

DD use

Perceptions of DD

DD usage (Dynamics at the DD)

User needs

Influence of DD on user (skill, skills transfer)

Dynamics around the DD

Champions

Systemic influence (Dynamics in community)

DD design

Supporting the DD (technical and champions)

Project implementation strategy

Reflection Phase

The Reflection Phase had four inputs:

1. The combination of OM and NE results regarding the same set of Key Focus Points.
2. The OM analysis.
3. The NE analysis of signifiers (linked to guiding questions from the collaborative workshops and those that emerged independently in the analysis), as well as outputs from discourse analysis in the form of themes and patterns.
4. Triangulation with previous DD-related research and evaluation studies.

These inputs were used in the Reflection workshop to develop the conclusions and recommendations.

4. Evaluation Results

Evaluation Outcomes

An example of the combination of OM and NE results under a Key Focus Point is provided below:

DD use - Most users started using the DD out of curiosity, which led to 'unassisted learning' experiences, including self-learning through experimentation and 'peer-assisted' learning where users teach one another. 90% of users could explain in their own words how computer literacy has enhanced their lives. Many users were surprised by the vast amount of information and the variety of applications on the DD. Both the OM and Narrative enquiry showed that the primary use of the DD is for educational and research purposes, as well as for playing games. The DD is also used for NGO projects, publicity and marketing, and as an alternative 'Internet café'. The majority of users (78%) said that they shared their knowledge and experience with other users. This sharing has resulted in the DD user base expanding through word of mouth. It appears that the DD is perceived as a tool through which users can discover new things that are relevant to them. This is achieved by playing games 'a lot' and having fun together. Learners are also given homework which entails searching for information on the DD.

The key evaluation outcomes from an OM perspective are:

The introduction of a DD addressed basic computer access, computer literacy, information resource and entertainment needs in the selected sites.

The DD does, at the same time, raise expectations of free Internet access that could be problematic to satisfy.

The DD falls outside the ambit of strict hands-on control by authority figures, and this created the possibility for new social dynamics, including opportunities for peer-assisted and self-directed learning. It also opens up other possibilities: some users expressed a sense of ownership, feeling that the DD was for their free use. Some users reported a sense of empowerment, with the DD being a Door Opener to opportunity.

An unexpected outcome was that users mentioned youths being kept busy and "off the streets", and therefore not engaging in petty crime. DDs supply a much-needed source of entertainment and interest in resource-poor communities.

Volunteer champions do emerge, and some champions train learners as champions. These behaviours are, however, limited and need encouragement and tangible support.

The key evaluation outcomes from an NE perspective include an overall sense that the DD is an effective, positive and meaningful contributor to the expansion of ICT skills in the communities in which it is implemented. The sense of ownership exhibited in the stories is significant. An interesting conclusion emerged that the DD is viewed as having become almost a "persona" in some of the communities where it was deployed. A DD thus has a certain agency, and as such can and does have an impact on societal dynamics. The implementation of the DD project is therefore not a neutral activity. The DD in itself creates a new set of social dynamics and, by function of its purpose, acts as an attractor and potential amplifier for existing social dynamics that then 'congregate' around the DD.

How did the two approaches complement each other?

OM provided a strategic framework for the DD evaluation initiative and aligned it with the evaluation chain. This provided the opportunity to place the evidence gathered, and its analysis, in the context of planning, activities, outputs, outcomes and impact, and so to prepare comments, conclusions and recommendations with justification. We have not reported in detail on these results, since our focus has been on the higher-level abstraction of the OM interview data (e.g. lessons learnt and unexpected results) into categories and super categories of key issues. The use of SenseMaker® enabled the quantitative analysis of the metadata gathered via the NE interviews and resulted in a view on, for example, the percentage of use among the interviewees and the prevalence of knowledge sharing. The tool also allowed the detection of clusters of stories that fall outside the dominant response profile (e.g. the Sesotho speakers in some primary schools in the Free State who find the DD relatively unhelpful). This pattern would have been difficult to detect in the OM analysis. The fundamental difference between the two approaches is that OM is normally focused mainly on the data it aims to gather, with key hypotheses formed up-front. NE, on the other hand, is pre-hypothesis, allowing for emergent findings, themes and patterns, which for this project enriched the data gathered through OM. Once the two teams started merging the gathered data at the key focus point level, a natural synergy (or collaborative sense making) developed, which is evident in the fact that many of the findings from the Narrative analysis (using both signifier analysis and discourse analysis) substantiated the OM findings.
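A minimal sketch of the kind of pattern detection described above, comparing a subgroup's signifier scores against the overall response profile, might look as follows. The field names, scores and deviation threshold are illustrative assumptions, not the project's data.

```python
# Flagging subgroups whose signifier scores deviate from the overall profile.
from statistics import mean

stories = [
    {"language": "isiZulu", "helpfulness": 0.9},
    {"language": "isiZulu", "helpfulness": 0.8},
    {"language": "Sesotho", "helpfulness": 0.3},
    {"language": "Sesotho", "helpfulness": 0.4},
]

overall = mean(s["helpfulness"] for s in stories)
for lang in {s["language"] for s in stories}:
    group = mean(s["helpfulness"] for s in stories if s["language"] == lang)
    if abs(group - overall) > 0.2:  # flag clusters far from the dominant profile
        print(f"{lang}: mean {group:.2f} deviates from overall {overall:.2f}")
```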
5. Recommendations

The alignment of OM with both a strategic framework and the evaluation chain creates a framework for the justification and verification of progress made within the initiative. This, however, makes the planning, design, implementation, analysis and consolidation of M&E results against evidence in an "after-the-fact" evaluation resource intensive. The planning process (intentional design), from clarifying the vision of the project down to the desired behavioural change, culminates in the practical evaluation process, where evaluators have to start at behavioural change and work back up to the vision. The question then is where and how the entire process can be streamlined, where it can be accelerated, and what the consequences of these actions would be. In projects with smaller M&E budgets, the overall intentional design still has to be done to provide the M&E framework. In the evaluation itself it is possible to accelerate from the recorded behavioural changes to their implications at the mission level, without reflecting on key issues as an intermediary step. Similarly, the way in which strategies were applied is linked to the statements made at the mission level, because the strategies and mission statements are constructed by the project drivers. In this way sufficient evidence can still be provided to propose what progress has been made towards the vision.

The reflection on key issues is, however, a key linkage point for collaborative sense making if OM is combined with an NE approach. One of the key issues in the evaluation was formulating a way in which OM and NE could 'speak' to each other meaningfully through the project phases. OM as a methodology is established and fairly well defined, whereas NE is still an emerging discipline, relatively flexible and easily adapted to work in conjunction with more traditional methods. Given the opportunity to conduct a similar evaluation, the project team would aim to see how the two methodologies could co-exist more in the earlier phases of a project and become more symbiotic, rather than NE being a follow-on process once OM has been conducted.

The evaluation was a point-in-time view of the impact that the DD is having in a certain context. The narrative analysis identified how the stories told by beneficiaries seem to have evolved over time since the DD arrived, i.e. the stories are dynamic. Conducting a thorough narrative monitoring process to assess how the stories change over time would be useful in understanding how the impact of the DD changes over time and yields different patterns of usefulness.
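A narrative monitoring process of this kind could be supported by comparing, per collection round, the share of stories signified with each theme, as in the brief sketch below. The theme names and counts are hypothetical.

```python
# Comparing theme proportions across two (hypothetical) collection rounds.
from collections import Counter

round_1 = Counter({"education": 40, "games": 50, "job seeking": 10})
round_2 = Counter({"education": 55, "games": 30, "job seeking": 15})

for theme in round_1 | round_2:
    p1 = round_1[theme] / sum(round_1.values())
    p2 = round_2[theme] / sum(round_2.values())
    print(f"{theme}: {p1:.0%} -> {p2:.0%} ({p2 - p1:+.0%})")
```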

The combination of these two approaches has resulted in an extensive and rich data set for further research, and research has been conducted to develop a more in-depth understanding of the uses of the DD and the reasons for these uses (Van de Vyver & Marais, forthcoming).

6. References

Babbie, E. (2006). The basics of social research. Belmont, California: Thomson Wadsworth.

Cambridge, G. (2008, November). Digital Doorway: enriching your mind. Paper presented at Science real and relevant: 2nd CSIR Biennial Conference, CSIR International Convention Centre, Pretoria. Retrieved from http://hdl.handle.net/10204/2664

Cambridge, G. L., Smith, R., & Gush, K. L. (2008). Kiosks are breaking through the digital divide in Africa: first among equals. KIOSK EUROPE Summer, 12-14. Retrieved from http://hdl.handle.net/10204/2231

Case, J. M., & Light, G. (2011). Emerging methodologies in engineering education research. Journal of Engineering Education, 100(1), 186-210.

Earl, S., Carden, F., & Smutylo, T. (2001). Outcome Mapping: Building learning and reflection into development programs. Ottawa: International Development Research Centre (IDRC).

Gush, K., & De Villiers, M. R. (2011, October). Qualitative study on software application usage and user behaviour at South African Digital Doorway sites. Paper presented at the 5th IDIA Conference: ICT for development: people, policy and practice, Lima, Peru. Retrieved from http://www.developmentinformatics.org/conferences/2011/papers/gush.html and http://hdl.handle.net/10204/5330

Herselman, M. E., Smith, R., Gush, K., Cambridge, G., Botha, A., & Marais, M. A. (2010, November). Applying the Digital Doorway design research model in facilitating skills transfer in rural communities. Paper presented at the 4th International IDIA Development Informatics Conference, Cape Town, South Africa. Retrieved from http://hdl.handle.net/10204/4626

Hinchman, L. P., & Hinchman, S. (1997). Memory, identity, community: The idea of narrative in the human sciences. Albany: State University of New York Press.

Marais, M., Smith, R., Pitse Boshomane, M., Herselman, M., Govender, N., Blignaut, S., Choles, A., & Sebe, D. (2012). Detailed annual report: Outcome evaluation of the Digital Doorway Initiative (Report to DST). Pretoria: CSIR.

Mitra, S. (2000, June). Minimally invasive education for mass computer literacy. Paper presented at the CRIDALA 2000 Conference, Hong Kong. Retrieved from http://www.hole-in-the-wall.com/docs/Paper01.pdf

Mitra, S. (2003). Minimally invasive education: a progress report on the "hole-in-the-wall" experiments. British Journal of Educational Technology, 34, 367-371.

Mitra, S., Dangwal, R., Chatterjee, S., Jha, S., Bisht, R. S., & Kapur, P. (2005). Acquisition of computing literacy on shared public computers: Children and the 'Hole in the Wall'. Australasian Journal of Educational Technology, 21(3), 407-426.

Polkinghorne, D. (1995). Narrative configuration in qualitative analysis. In J. A. Hatch & R. Wisniewski (Eds.), Life history and narrative (pp. 5-23). London: Falmer.

Rachel, L. (2006). Guidelines for impact or outcome evaluation. Produced for the Gender and Development Group (PREM) of the World Bank. Retrieved from http://siteresources.worldbank.org/INTGENDER/Resources/UNIFEMEvaluationGuidelinesFinal.pdf

Smith, R., Cambridge, G., & Gush, K. (2006, February). Digital Doorway: computer literacy through unassisted learning in South Africa. Paper presented at the 1st CSIR Biennial Conference: Research and Innovation, CSIR International Convention Centre, Pretoria. Retrieved from http://hdl.handle.net/10204/2676

Stillman, L., Herselman, M., Marais, M., Pitse Boshomane, M., Plantinga, P., & Walton, S. (2010, November). Digital Doorway: Social-technical innovation for high-needs communities. Paper presented at the 4th International IDIA Development Informatics Conference, Cape Town, South Africa. Retrieved from http://hdl.handle.net/10204/4627

Van de Vyver, A., & Marais, M. A. (2013). Evaluating users' perceptions of the Digital Doorway: a narrative analysis. Accepted for publication in the Special Issue on ICT and Development in Africa, Information Technology for Development; planned publication date September 2013.