Supporting convergence in groups - TU Delft Repositories


Supporting convergence in groups: (re-)design and evaluation of two thinkLets for convergence and a technique for similarity detection

G.P.J. Duivenvoorde 1096222

Master thesis

June 2010

Delft University of Technology Faculty of Technology, Policy and Management Systems Engineering section

Systems Engineering, Policy Analysis & Management

Graduation committee:

Prof.dr.ir. A. Verbraeck Dr. S. G. Lukosch Dr. S.J. Overbeek Ir. C. Lee Dr.ir. G.-J. de Vreede

I. Summary

Brainstorming with a Group Support System (GSS) can be a helpful tool for solving complex problems. Social comparison and association effects are factors that stimulate the generation of creative and high-quality ideas or concepts. In addition, the GSS's capability to let a group work in parallel contributes to the swiftness with which a fast-growing set of ideas is generated: groups of 10 to 15 participants are able to produce 100 to 150 ideas in as little as 15 minutes. The set of generated ideas, however, also has limitations. Typically it is characterized by redundancy, ambiguity, off-topic ideas and a lack of shared understanding. Extracting the key ideas from such a large set is time consuming and easily causes the facilitator and participants to suffer from cognitive overload. Extracting the key ideas is a process in which the group combines selecting and summarizing ideas with clarification techniques to create shared understanding.

To address these challenges, groups and facilitators can benefit from methods and techniques to effectively extract the key ideas from the brainstormed list of ideas, without losing any promising ideas. Such techniques and methods are typically referred to as convergence methods. In interviews with professional facilitators we indeed found that they consider convergence to be difficult and time consuming. A large body of literature, mainly GSS case studies, also describes hurdles to convergence. These hurdles include (1) information overload at the beginning of a convergence task and (2) the cognitive effort required to complete a convergence task. Therefore the main research question of this thesis is: 'How can convergence processes become more successful and effective?'

Based on an analysis of the current set of methods for convergence we identified four opportunities to improve the successfulness and effectiveness of a convergence process. The methods included in this study originate from the ThinkLet library, the method database of the International Association of Facilitators (IAF) and from a literature review. The identified opportunities are:

• Removing the task of detecting redundant concepts from the facilitator, to lower his or her workload.

• Addressing the hurdles that currently exist when converging in a parallel way, as is done in, among others, the FocusBuilder thinkLet. The current limitations of this thinkLet include:

o Lack of comprehensiveness of the end result
o Inability of the facilitator to monitor the process

• Creating a scalable and fast pre-selection method.

• Improving support for inexperienced facilitators to manage a convergence process in a large group.

The methods found were classified and compared using a classification scheme with two axes: (1) the output of a method and (2) the way of working the method implies. Using the example of a creative problem solving workshop, in which we tried to match a method for convergence to a convergence task under different scenarios, we identified the opportunities for improvement. The scenarios differed in the number of participants and in the facilitator's skill level.

In response to these opportunities we have designed three artefacts. Firstly, a new thinkLet, Divide&Conquer, was developed that enables large groups to quickly make a pre-selection of concepts. Secondly, we designed a modifier to the FocusBuilder thinkLet. This thinkLet supports the creation of shared understanding, achieves a reduction of the concepts under consideration and removes redundant concepts from a brainstorm artefact in a scalable and fast way. Thirdly, a technique for similarity detection, normally used for plagiarism detection and automatic grading of written texts, was adapted and evaluated for detecting redundant concepts in a brainstorm or convergence artefact. The technique uses normalized vector representations of concepts, based on a thesaurus, to detect similar concepts.
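
The thesis gives no code in this summary, but the idea behind thesaurus-based normalized word vectors can be sketched as follows. The `THESAURUS` mapping, the 0.5 threshold and the function names are illustrative assumptions, not the thesis's actual implementation: each concept is mapped to a bag-of-words vector in which synonyms share one token, and concept pairs with a high cosine similarity are flagged as potentially redundant.

```python
from itertools import combinations

# Toy thesaurus: maps words to a canonical synonym-group token, so that
# "car" and "automobile" normalize to the same dimension of the vector.
THESAURUS = {"car": "vehicle", "automobile": "vehicle", "fast": "quick"}

def normalize(concept: str) -> dict:
    """Build a normalized bag-of-words vector for one brainstorm concept."""
    vec = {}
    for word in concept.lower().split():
        token = THESAURUS.get(word, word)
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def similar_pairs(concepts, threshold=0.5):
    """Return index pairs of concepts whose similarity exceeds the threshold."""
    vecs = [normalize(c) for c in concepts]
    return [(i, j) for i, j in combinations(range(len(concepts)), 2)
            if cosine(vecs[i], vecs[j]) >= threshold]
```

With this sketch, `similar_pairs(["buy a car", "buy an automobile", "plant trees"])` flags only the first two concepts as redundant, because the thesaurus collapses "car" and "automobile" onto the same token.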

To assess the effectiveness and success of the designed artefacts, the following process- and result-oriented metrics are used. Process-oriented: acceptance, satisfaction, facilitator dependence, scalability, commitment, productivity and efficiency. Result-oriented: speed, redundancy, reduction, refinement, comprehensiveness, shared understanding (ambiguity), satisfaction and commitment.
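
As an illustration of how two of the result-oriented metrics might be computed, consider the following sketch. These formulas are a plausible operationalization on our part, not necessarily the exact definitions used in the thesis:

```python
def reduction_ratio(n_before: int, n_after: int) -> float:
    """Fraction by which a convergence step shrinks the concept set;
    e.g. 150 concepts reduced to 75 gives 0.5."""
    return 1.0 - n_after / n_before

def comprehensiveness(key_concepts: set, converged: set) -> float:
    """Fraction of the (expert-identified) key concepts that survive
    convergence; 1.0 means no promising idea was lost."""
    return len(key_concepts & converged) / len(key_concepts)
```

These two metrics pull in opposite directions: aggressive reduction risks lowering comprehensiveness, which is exactly the trade-off the evaluation workshops measure.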

Evaluation in groups of the technique for similarity detection, the new Divide&Conquer thinkLet and the modified FocusBuilder thinkLet revealed that:

• Even with a moderate detection rate of 50%, participants were able to remove redundant concepts faster than participants who did not use the artefact in which concepts were ordered according to the automatically detected redundancies. The evaluation, however, is limited to one case study. Further evaluation is needed to validate the results and to research the use of similarity detection within the new and other thinkLets.

• The Divide&Conquer thinkLet can be used within groups to quickly make a pre-selection of concepts that the group deems worthy of further attention. The process and results of the thinkLet were accepted by the participants of two workshops; however, the process needs a thorough explanation before the start in order to reach agreement on it. The thinkLet achieves a pre-selection quicker than other pre-selection methods because, in principle, fewer votes than the number of participants are collected per concept. Based on the average value and standard deviation it is decided whether more votes per concept are needed. This increases the speed, and therefore the scalability, of the pre-selection process. The pre-selections made in the evaluation workshops with this thinkLet contained only on-topic items and reduced the original brainstorm artefact by 50% on average, with a standard deviation of 10%. Besides explaining the process and presenting the results, no facilitator effort is required.

• The modified FocusBuilder thinkLet can be used on a brainstorm artefact directly or after a pre-selection has been made. The thinkLet fosters the creation of shared understanding and achieves a (further) reduction in the number of concepts under consideration by removing and summarizing redundant concepts and removing off-topic concepts. The thinkLet uses sub-groups of participants that work on sub-sets of concepts in parallel, and convergence is achieved in three or four rounds. In previous case studies the comprehensiveness of the end result was too low; we therefore removed the first round of the thinkLet, in which the participants work alone, to limit participant bias. Evaluation revealed that the comprehensiveness of the end result increased, without degrading any of the other metrics that were already positive. Because of the parallel way of working, the thinkLet is fast and scalable. Facilitator interventions are only needed to explain the process and to present the end result; the real convergence effort is executed by the participants, so facilitator dependence of this thinkLet is low. The facilitator's inability to monitor the process remains an opportunity for improvement of this thinkLet; a design for this is described, but not evaluated.
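
The vote-allocation rule of the Divide&Conquer thinkLet described above can be sketched as follows. The concrete cut-off values (`max_sd`, `threshold`) and function names are illustrative assumptions, not the thesis's exact classification rules:

```python
import statistics

def needs_more_votes(votes: list, max_sd: float = 1.0) -> bool:
    """With fewer votes than participants collected per concept, a high
    standard deviation signals disagreement: collect extra votes first."""
    return statistics.pstdev(votes) > max_sd

def keep_in_preselection(votes: list, threshold: float = 3.0) -> bool:
    """Keep a concept when its mean score on the 5-point scale is high enough."""
    return statistics.mean(votes) >= threshold
```

Only concepts on which the initial voters disagree strongly cost extra votes, which is why the expected number of votes per concept stays below the number of participants and the pre-selection scales with group size.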
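
The parallel sub-group mechanism of the modified FocusBuilder thinkLet presupposes splitting the concept set over the sub-groups. A minimal round-robin split, assuming nothing about how the thesis actually assigns concepts, could look like:

```python
def partition(concepts: list, n_subgroups: int) -> list:
    """Deal concepts round-robin over the sub-groups so that each sub-group
    converges on a roughly equal share of the artefact in parallel."""
    subsets = [[] for _ in range(n_subgroups)]
    for i, concept in enumerate(concepts):
        subsets[i % n_subgroups].append(concept)
    return subsets
```

Because each sub-group only processes its own share, the wall-clock time of a round stays roughly constant as the group and the artefact grow, which is the source of the thinkLet's scalability.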

The outcome of this project is relevant for every professional interested in efficient collaboration within his or her project team, business unit or organisation, as well as for practitioners, facilitators and collaboration engineers, because it proposes solutions for the time-consuming convergence step in GSS-supported meetings. Further effort is needed to evaluate the performance of the two thinkLets within more workshops, but the results indicate that the field of evaluation can be extended to organizations and professionals. Further research is needed to improve the accuracy of the detection of redundant concepts and to integrate the detection technique within the two thinkLets mentioned.

Readers with little time can benefit from the following guidelines. If you are interested in more detail about the background, environment and business need identified in this thesis, read chapters 1 and 3. The methodology and research questions are detailed in chapter 2. More detail about the convergence pattern of collaboration and its sub-patterns, as well as definitions from the field, can be found in chapter 4. Chapter 5 describes how to measure the success and effectiveness of a convergence method and concludes with a model that can be used to select a convergence method for a convergence task. The analysis and overview of the methods for convergence included in this thesis are described in chapter 6, which concludes with the four opportunities for improvement. In chapter 7 we present the design of three artefacts answering the identified opportunities; chapter 8 details the design of a support platform for these artefacts based on the TeamSupport GSS. Chapter 9 describes the evaluation workshops and ends with conclusions in section 9.6. Our overall findings and conclusions are presented in chapter 10. Limitations and directions for future research can be found in chapter 11.

II. FOREWORD

In this thesis I describe my graduation project on the convergence pattern of collaboration in GSS-supported collaborative work. I have been interested in the facilitation of and support for collaboration processes since the end of my bachelor programme; choosing this subject for my graduation project therefore came as no big surprise. I enjoyed executing the project and appreciate the freedom given to me by my supervisors during the project.

The breadth of the project's scope, for instance designing and modifying two methods and a technique, has benefits but also disadvantages. Choosing the entire convergence process as the scope enabled us to design three artefacts, but also led to a limited number of evaluation workshops per artefact due to limited resources. The choice to be 'our own problem-owner' likewise has benefits and disadvantages. Freedom certainly is one of the benefits, but the disadvantages include the complexity of describing the relevance and applicability of the research as well as finding a good environment for evaluation.

During the project I learned a lot and also met a lot of interesting people.

I would like to thank Hugo Verheul, Marko Wilde, Gert-Jan de Vreede, Danny van den Boom, Michel van Eekhout, Prof. J.B.F. Mulder and Jan Lelie for the opportunity to interview and discuss with them and for their valuable comments, insights and sharing their knowledge with me. During the project I intensively collaborated with TeamSupport for the realization of the technical aspects of the design. I would like to thank Mike, Andrew and Calvin for their time, support and the fantastic tool that they developed to enable evaluation. I also owe many thanks to Arjen van Kol for giving access to his database of synonyms.

We found opportunities to integrate the evaluation workshops within a course at the university. I would like to thank Jolien Ubacht, Laurens de Vries, Jan-Anne Annema and Marjan Hagenzieker for providing the opportunity and support to make the evaluation workshops happen. Also thanks to the participants of the three evaluation workshops.

During the project I have received a lot of valuable comments and critical reflections from the members of my graduation committee. Their support, assistance and reflection helped me to constantly improve the quality of work. Therefore I would like to thank Alexander Verbraeck, Gert-Jan de Vreede, Sietse Overbeek and Stephan Lukosch. Especially Stephan, thank you for your day-by-day support, comments and reflection, even from abroad.

I hope you enjoy reading this thesis and can benefit from the content.

Gijs Duivenvoorde

June 2010

III. Table of contents

1 INTRODUCTION
1.1 STRUCTURE OF AND SUPPORT FOR A COLLABORATION PROCESS
1.2 REDUCE AND CLARIFY PATTERNS OF COLLABORATION
1.3 THE NEED FOR CONVERGENCE RESEARCH
1.4 THESIS OUTLINE

2 RESEARCH QUESTION, SUB QUESTIONS AND METHODOLOGY
2.1 RESEARCH QUESTIONS
2.2 METHODOLOGY

3 ENVIRONMENT AND BUSINESS NEED
3.1 PEOPLE & ORGANIZATIONS
3.2 TECHNOLOGY
3.3 BUSINESS NEED

4 THE CONVERGENCE PATTERN OF COLLABORATION
4.1 REDUCE
4.2 CLARIFY
4.3 THEORIES AND MODELS OF CONVERGENCE
4.4 SUMMARY: A CONVERGENCE PROCESS

5 ASSESSING THE PERFORMANCE OF A METHOD FOR CONVERGENCE
5.1 DIMENSIONS AND METRICS OF SUCCESS AND EFFECTIVENESS FOR A CONVERGENCE METHOD
5.2 MEASURING THE SUCCESS AND EFFECTIVENESS OF CONVERGENCE

6 OVERVIEW AND ANALYSIS OF CURRENT METHODS FOR CONVERGENCE
6.1 THINKLETS
6.2 INTERNATIONAL ASSOCIATION OF FACILITATORS (IAF) METHODS DATABASE
6.3 COMPARING THINKLETS AND IAF METHODS
6.4 OTHER METHODS FOR CONVERGENCE
6.5 COMMON ELEMENTS FROM THE THINKLET DATABASE, THE IAF METHOD DATABASE AND LITERATURE REVIEW
6.6 OPPORTUNITIES FOR IMPROVING CONVERGENCE

7 DESIGN: SUPPORT FOR CONVERGENCE
7.1 DETECTION OF REDUNDANT CONCEPTS
7.2 DESIGN OF A PRE-SELECTION THINKLET
7.3 MODIFICATION OF THE FOCUSBUILDER THINKLET
7.4 SUMMARY OF THE DESIGNED ARTEFACTS

8 GSS REQUIREMENTS FOR THE DIVIDE&CONQUER AND FOCUSBUILDER THINKLET
8.1 DIVIDE&CONQUER THINKLET
8.2 MODIFIED FOCUSBUILDER THINKLET
8.3 SCRUM

9 EVALUATION OF THE DIVIDE&CONQUER THINKLET, MODIFIED FOCUSBUILDER THINKLET AND NORMALIZED WORD VECTORS
9.1 EVALUATION GOALS
9.2 EVALUATION TASKS
9.3 GREEN-ICT WORKSHOP
9.4 ENERGY TRANSITION WORKSHOP
9.5 SHARED SPACE WORKSHOP
9.6 CONCLUSION AND DISCUSSION OF THE RESULTS OF THE THREE EVALUATION WORKSHOPS

10 CONCLUSIONS

11 LIMITATIONS AND FUTURE RESEARCH

APPENDIX A OVERVIEW OF THE GUIDELINES FOR IS RESEARCH
APPENDIX B OVERVIEW OF ARTICLES
APPENDIX C INTERVIEW WITH DR. H. VERHEUL, TRANSCRIPT
APPENDIX D INTERVIEW WITH DIPL. ING. M. WILDE
APPENDIX E INTERVIEW WITH DR. GERT-JAN DE VREEDE
APPENDIX F INTERVIEW WITH DANNY VAN DEN BOOM AND MICHEL VAN EEKHOUT
APPENDIX G INTERVIEW PROF.DR.ING. J.B.F. MULDER
APPENDIX H INTERVIEW JAN LELIE
APPENDIX I DETAILED OVERVIEW OF THINKLETS FOR CONVERGENCE
APPENDIX J DETAILED OVERVIEW OF IAF METHODS FOR CONVERGENCE
APPENDIX K SCRUM BACKLOG FOR DIVIDE&CONQUER AND FOCUSBUILDER THINKLETS
APPENDIX L EXAMPLE AND EXPLANATION OF NORMALIZED WORD VECTORS
APPENDIX M USING NORMALIZED WORD VECTORS IN A BRAINSTORM ARTEFACT
APPENDIX N OVERVIEW OF DIFFERENT VARIATIONS FOR 4 PEOPLE VOTE USING A 5-POINT SCALE
APPENDIX O DIVIDING CONCEPTS AMONG PARTICIPANTS FOR VOTING
APPENDIX P DETAILED EVALUATION WORKSHOP DESCRIPTIONS
APPENDIX Q AGENDA EVALUATION WORKSHOPS
APPENDIX R RESULT ORIENTED DATA OF THE EVALUATION WORKSHOPS
APPENDIX S PARTICIPANT COMMENTS TO THE EVALUATION WORKSHOPS
APPENDIX T DIVIDE&CONQUER THINKLET TEAMSUPPORT MOCK-UPS
APPENDIX U FOCUSBUILDER THINKLET TEAMSUPPORT MOCK-UPS
APPENDIX V TEAMSUPPORT SCREENSHOTS: SUPPORT FOR THE DIVIDE&CONQUER THINKLET
APPENDIX X CD-ROM

IV. List of figures

FIGURE 2.1, FRAMEWORK FOR IS RESEARCH, ADOPTED FROM HEVNER ET AL. (2004)
FIGURE 3.1, ELLIS ET AL. (1991): COLLABORATION SCENARIOS
FIGURE 3.2, DIMENSIONS OF THE GROUPWARE SPECTRUM, ADOPTED FROM ELLIS ET AL. (1991)
FIGURE 3.3, GENERIC CREATIVITY PROCESS MODEL, ADAPTED FROM WARR AND O'NEILL (2005)
FIGURE 4.1, KINCAID'S BIASES IN PAST COMMUNICATION THEORY AND RESEARCH, ADOPTED FROM ROGERS AND KINCAID (1981)
FIGURE 4.2, ADAPTATION OF KINCAID'S MODEL OF CONVERGENCE (1981) BY SLATER ET AL. (1994)
FIGURE 4.3, MEDIA SYNCHRONICITY THEORY HYPOTHESIS, ADOPTED FROM DENNIS ET AL. (2008)
FIGURE 4.4, GENERAL INPUT – PROCESS – OUTPUT MODEL FOR A COLLABORATION TASK (NUNAMAKER ET AL. 1991)
FIGURE 4.5, INPUT – PROCESS – OUTPUT MODEL FOR A CONVERGENCE TASK
FIGURE 5.1, DIMENSIONS TO ASSESS THE PERFORMANCE OF A CONVERGENCE PROCESS
FIGURE 5.2, UML CLASS DIAGRAM FOR THE INPUT – PROCESS – OUTPUT MODEL
FIGURE 6.1, THINKLET CONCEPT (BRIGGS & DE VREEDE 2009)
FIGURE 6.2, CLASSIFICATION OF CONVERGENCE THINKLETS
FIGURE 6.3, CLASSIFICATION OF CONVERGENCE THINKLETS, 3 DIMENSIONS
FIGURE 6.4, IAF CONVERGENCE METHODS CLASSIFICATION
FIGURE 6.5, COMPARING IAF METHODS AND THINKLETS
FIGURE 6.6, DIALOGUE MAP ELEMENTS, ADOPTED FROM PICTURE IT SOLVED (2006)
FIGURE 6.7, PARTICIPANT-DRIVEN GSS CONVERGENCE PROCESS ELEMENTS (HELQUIST ET AL. 2007)
FIGURE 7.1, SPEEDING UP THE VOTING PROCESS, DIVIDE&CONQUER THINKLET
FIGURE 7.2, 5-POINT VOTING SCALE
FIGURE 7.3, MODIFIED FOCUSBUILDER THINKLET
FIGURE 8.1, TEAMSUPPORT DIVIDE&CONQUER MODULE, VOTING
FIGURE 8.2, TEAMSUPPORT DIVIDE&CONQUER MODULE, VOTING RESULTS
FIGURE 9.1, ENERGY TRANSITION WORKSHOP IMPRESSION

V. List of tables

TABLE 1.1, PATTERNS OF COLLABORATION, ADOPTED FROM READ, RENGER, BRIGGS AND DE VREEDE (2009)
TABLE 1.2, THE REDUCE AND CLARIFY PATTERN OF COLLABORATION (BRIGGS, KOLFSCHOTEN, DE VREEDE & DOUGLAS 2006)
TABLE 3.1, GSS FUNCTIONS BASED ON HAYNE (1999)
TABLE 4.1, FACILITATORS' DEFINITIONS ON EFFECTIVE CONVERGENCE
TABLE 4.2, THEORIES AND MODELS ON CONVERGENCE
TABLE 5.1, COMPARING DIFFERENT DIMENSIONS OF SUCCESS AND EFFECTIVENESS
TABLE 5.2, CODING VARIABLES, ADOPTED FROM BADURA ET AL. (2010)
TABLE 5.3, TIME SPENT ON DIVERGENCE AND CONVERGENCE, ADOPTED FROM BRAGGE ET AL. (2005; 2007)
TABLE 6.1, OVERVIEW OF THINKLETS FOR CONVERGENCE
TABLE 6.2, FOCUSBUILDER WORKSHOP DATA FROM DAVIS ET AL. (2008)
TABLE 6.3, REDUCTION AND COMPREHENSIVENESS LEVELS FOR TABLE 6.2
TABLE 6.4, OVERVIEW OF IAF METHODS FOR CONVERGENCE
TABLE 6.5, CONVERGENCE METHODS FOUND IN LITERATURE
TABLE 6.6, GOAL FOR CONVERGENCE IN CREATIVE PROBLEM SOLVING
TABLE 6.7, CANDIDATE THINKLETS FOR CONVERGENCE WITHIN A CREATIVE PROBLEM SOLVING WORKSHOP
TABLE 7.1, MAPPING DIFFERENCES BETWEEN MANUAL CODING AND NWV INTO CATEGORIES
TABLE 7.2, COMPARISON OF AUTOMATICALLY VERSUS MANUALLY DETECTED SIMILARITIES
TABLE 7.3, AVERAGE AND STANDARD DEVIATION FOR 4 VOTES ON A 5-POINT SCALE
TABLE 7.4, CLASSIFICATION RULES
TABLE 9.1, GREEN-ICT WORKSHOP: COMMITMENT AND SATISFACTION FOR THE FIRST PART OF THE WORKSHOP
TABLE 9.2, GREEN-ICT WORKSHOP: COMMITMENT, SATISFACTION, PRODUCTIVITY AND EFFICIENCY FOR THE ENTIRE WORKSHOP
TABLE 9.3, GREEN-ICT WORKSHOP, LEVEL OF REDUCTION
TABLE 9.4, GREEN-ICT WORKSHOP, LEVEL OF REDUNDANCY
TABLE 9.5, GREEN-ICT WORKSHOP, LEVEL OF SHARED UNDERSTANDING
TABLE 9.6, GREEN-ICT WORKSHOP, EXPERT OPINION ON THE CRITICALNESS OF 8 BRAINSTORM CONCEPTS
TABLE 9.7, GREEN-ICT WORKSHOP, LEVEL OF COMPREHENSIVENESS
TABLE 9.8, ENERGY TRANSITION WORKSHOP, SATISFACTION WITH PROCESS AND RESULT
TABLE 9.9, ENERGY TRANSITION WORKSHOP, COMMITMENT, EFFICIENCY AND PRODUCTIVITY
TABLE 9.10, ENERGY TRANSITION WORKSHOP, SATISFACTION AND THE USE OF NWV FOR THE INSTRUMENT PART (2 & 3)
TABLE 9.11, ENERGY TRANSITION WORKSHOP, NWV QUESTION SCORES
TABLE 9.12, ENERGY TRANSITION WORKSHOP, EXECUTION TIMES OF THE DIFFERENT ACTIVITIES
TABLE 9.13, ENERGY TRANSITION WORKSHOP, REDUCTION RATIOS
TABLE 9.14, ENERGY TRANSITION WORKSHOP, REDUNDANCY LEVELS
TABLE 9.15, ENERGY TRANSITION WORKSHOP, LEVEL OF SHARED UNDERSTANDING
TABLE 9.16, SHARED SPACE WORKSHOP, PROCESS EVALUATION
TABLE 9.17, SHARED SPACE WORKSHOP, SPEED
TABLE 9.18, SHARED SPACE WORKSHOP, LEVEL OF REDUCTION
TABLE 9.19, SHARED SPACE WORKSHOP, LEVEL OF REDUNDANCY
TABLE 9.20, SHARED SPACE WORKSHOP, LEVEL OF SHARED UNDERSTANDING

1 Introduction

Brainstorming with a Group Support System (GSS) enables the efficient capturing of ideas and comments in large groups. The factors that stimulate the generation of creative and high-quality ideas include, but are not limited to, social comparison and association effects. In addition, the GSS's capabilities that allow a group to work in parallel contribute to the swiftness with which a fast-growing set of ideas is generated. Groups of 10 to 15 participants are able to produce 100 to 150 ideas in as little as 15 minutes. However, such idea sets often have limitations: for example, they are typically characterized by redundancy, ambiguity, and a lack of shared understanding. Extracting the key ideas from such a large set of ideas is time consuming and easily causes the facilitator and participants to suffer from cognitive overload. To address these challenges, groups and facilitators can benefit from methods and techniques to effectively extract the key ideas from the brainstormed list of ideas, without losing any promising ideas. Such techniques and methods are typically referred to as convergence methods.

1.1 Structure of and support for a collaboration process

A way to structure a collaboration process is to use the patterns of collaboration that have been identified in the literature. When a team wants to collaboratively achieve a goal, it must move through a reasoning process. To do so, the team must engage in a sequence of basic patterns of collaboration, according to Briggs, de Vreede and Nunamaker (2003). De Vreede and Briggs (2005) show how it is possible to move through the process of goal attainment by using these patterns. Table 1.1 lists the names and definitions of the six basic patterns of collaboration. Using these definitions, every part of a collaboration process can be categorized. A team could, for instance, first generate alternative solutions to its problem in a solution-finding workshop. After having generated a number of solutions, the team could decide to reduce and clarify the list of solutions, after which it evaluates the instrumentality of the different solutions. A final stage of the collaboration process could be to build commitment for one or two of the generated alternative solutions. In this thesis the focus is on the reduce and clarify patterns of collaboration.

Table 1.1, patterns of collaboration, adapted from Read, Renger, Briggs and de Vreede (2009)
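The sequencing described above can be sketched as data. This is a minimal illustration only: the enum member names simply mirror the pattern names from the text, and the workshop sequence is the illustrative solution-finding example, not a prescribed order.

```python
from enum import Enum

class Pattern(Enum):
    """The six basic patterns of collaboration, named as in Table 1.1."""
    GENERATE = "generate"
    REDUCE = "reduce"
    CLARIFY = "clarify"
    ORGANIZE = "organize"
    EVALUATE = "evaluate"
    BUILD_COMMITMENT = "build commitment"

# The solution-finding workshop from the example, expressed as a sequence
# of patterns of collaboration.
solution_finding_workshop = [
    Pattern.GENERATE,          # brainstorm alternative solutions
    Pattern.REDUCE,            # narrow the list down to promising solutions
    Pattern.CLARIFY,           # build shared understanding of the remaining solutions
    Pattern.EVALUATE,          # assess the instrumentality of each solution
    Pattern.BUILD_COMMITMENT,  # commit to one or two solutions
]

for step in solution_finding_workshop:
    print(step.value)
```

Expressing a collaboration process as such a sequence is what makes it possible to categorize, compare and redesign workshops; the reduce and clarify steps in the middle are the subject of this thesis.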

To assist groups in executing one or a sequence of the patterns of collaboration mentioned in Table 1.1, several tools and methodologies are available. An example of a tool is a Group Support System (GSS), e.g. CS Results (2006), Groupsystems.com (2007) and TeamSupport (2007). Methods to support groups and the facilitation of group work are described by Jenkins (2008) in the database of the International Association of Facilitators (IAF) and by Kolfschoten, Briggs, de Vreede and Appelman (2006) in extensive work on Collaboration Engineering (CE). In this thesis, the use of a GSS to support collaboration is assumed.

1.2 Reduce and clarify patterns of collaboration

The generate pattern of collaboration is relatively easy to achieve compared to the reduce and clarify patterns of collaboration. A vast amount of research into the generate pattern exists; interesting questions regarding tool support, methods, efficiency, effectiveness and facilitation have already been addressed in numerous studies. In a collaboration task, the next step after a generate pattern of collaboration often is a reduce or clarify pattern. Some researchers, like Helquist, Santanen and Kruse (2007), claim that a reduce and clarify pattern of collaboration should always follow a generate pattern. Why and when a convergence step is needed is elaborated upon later in this thesis, but one could think of an example where numerous solutions to an engineering problem are (electronically) brainstormed by a team, and the team’s next goal is to focus only on those solutions that are most appropriate to solve the problem. Here the team needs to reduce and clarify before it can continue to elaborate on, pick the best of, or organize the solutions. Reducing and clarifying thus is no end in itself, but a prerequisite to continue the task of finding the solution that solves the engineering problem. This holds in general as well: to continue the process of goal attainment, a convergence step is needed after a generate pattern of collaboration.

The goal of a reduce and clarify pattern of collaboration is “to reduce the group’s cognitive load in order to address all concepts, conserve resources, have less to think about, and achieve shared meaning of the concepts” (Briggs et al., 2003; Davis, de Vreede & Briggs, 2007). For both the reduce and the clarify pattern of collaboration, sub-patterns are defined. These are listed in Table 1.2, together with their definitions.

Table 1.2, the reduce and clarify pattern of collaboration (Briggs, Kolfschoten, de Vreede & Douglas 2006)

Previously, the combination of the reduce and clarify patterns of collaboration was denoted by the term convergence. From the definition by Briggs, de Vreede and Nunamaker (2003) it can be understood that convergence can be split into two activities, reduce and clarify: ‘to move from a state of having many concepts to a state of having a focus on, and understanding of, the few worthy of further attention’. According to Briggs, de Vreede and Nunamaker (2003), a convergence process has at least two components. The first is an element of filtering, which may be accomplished by eliminating concepts from consideration, or by abstracting multiple specific concepts into a more general concept. The second is an element of understanding, establishing shared meaning for the concepts under consideration (Briggs, de Vreede & Nunamaker 2003).
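The filtering component can be illustrated with a deliberately naive sketch: a greedy pass over a list of brainstormed ideas that drops an idea when it overlaps too strongly with one already kept. The token-based Jaccard measure and the 0.6 threshold are assumptions chosen for illustration only; they are not the similarity detection technique designed later in this thesis.

```python
def tokens(idea: str) -> set:
    """Very crude tokenization: lowercase, split on whitespace."""
    return set(idea.lower().split())

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the two ideas' token sets (0.0 .. 1.0)."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def filter_redundant(ideas, threshold=0.6):
    """Keep an idea only if it is not too similar to any idea kept so far."""
    kept = []
    for idea in ideas:
        if all(jaccard(idea, k) < threshold for k in kept):
            kept.append(idea)
    return kept

ideas = [
    "install solar panels on the roof",
    "install solar panels on every roof",  # near-duplicate of the first
    "switch to LED lighting",
]
print(filter_redundant(ideas))
```

Note that a filter like this addresses only the reduce side of convergence; the understanding side, establishing shared meaning, is inherently a group activity that no similarity measure can replace.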

1.3 The need for convergence research

This thesis focuses on the process of, and support for, reduce and clarify patterns of collaboration. Studying these patterns is important for a number of reasons. First of all, very few studies have been published on this topic, in contrast to the numerous publications regarding the generate pattern (Briggs, Nunamaker, & Sprague, 1997).

Second, from our own experience in facilitating Group Support System workshops we have learned that it is very easy to let a group generate many ideas, but very hard to process these ideas. This observation is also described in several other studies, for instance in Catledge and Potts (1996), Chen, Liou, Wang, Fan, and Chi (2007), Den Hengst and Adkins (2005), Heninger, Dennis and Hilmer (2006), Herrmann (2009), Nunamaker Jr. (1997), Shen, Chung, Li, and Shen (2004) or Slater and Anderson (1994). Research by den Hengst and Adkins (2007), Chen, Hsu, Orwig, Hoopes and Nunamaker (1994) and Easton, George, Nunamaker and Klein (1990) shows that other facilitators of collaboration processes find convergence to be one of the most difficult and time consuming steps of a collaboration process. This is problematic, because convergence is an enabler for other patterns of collaboration. Dennis, Fuller and Valacich (2008) illustrate this by stating: ‘The key point is that conveying information, deliberating on it, and converging on a shared meaning are necessary communication processes for any task, regardless of its level of equivocality or uncertainty. Without adequate conveyance of information with deliberation, individuals will reach incorrect conclusions’. Dennis et al. (2008) conclude that without adequate convergence, a group cannot move forward. Reasons why convergence is difficult and can be time consuming are described by Davis et al. (2007) and Chen, Hsu, Orwig, Hoopes and Nunamaker (1994). These include (1) information overload at the start of a convergent task, (2) the cognitive effort that is required for convergent tasks and (3) the need for a higher granularity of meeting ideas to be stored (i.e. meeting memory) for future decision making and analysis. More insight into this pattern of collaboration can help in removing the current hurdles regarding the reduce and clarify patterns of collaboration. Improved insight could help in assisting and training facilitators, and could lead to better design guidelines for platforms, like GSSs, that support convergent patterns of collaboration.

The main objective of this thesis is to improve the success and effectiveness of the execution of a convergence step. In addition to a method or technique, the platform needed to support or execute it is described. To gain more insight into the existing set of methods for convergence, an overview and analysis is given. To assess the performance of convergence methods, objective dimensions and metrics are needed. Possible improvements seem to lie in the areas of reducing cognitive load, support for large groups and support for less experienced facilitators.
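To make the idea of objective metrics concrete, a reduction ratio could, for instance, be operationalized as the fraction of concepts filtered out during convergence. The function name and formula below are illustrative assumptions of mine; the actual dimensions and metrics are introduced in chapter 5.

```python
def reduction_ratio(n_input: int, n_output: int) -> float:
    """Fraction of concepts removed during a convergence step.

    n_input:  number of concepts entering the convergence step
    n_output: number of concepts remaining afterwards
    """
    return 1 - n_output / n_input

# Example: 120 brainstormed ideas reduced to 18 key ideas.
print(f"{reduction_ratio(120, 18):.0%}")  # prints 85%
```

A single ratio like this says nothing about whether the right concepts survived; that is why multiple dimensions, such as redundancy and shared understanding, are needed alongside it.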

1.4 Thesis outline

The next chapter, chapter 2, specifies the research questions to be answered in this thesis and details the methodology to be followed. Chapter 3 describes the environment and corresponding business need in more detail. Chapter 4 gives more background on the reduce and clarify patterns of collaboration, introduces four theories relevant for these patterns of collaboration and ends with a summary of the convergence process in general. Chapter 5 introduces dimensions and metrics that can be used to assess the success and effectiveness of a convergence process in terms of the process and its result. Chapter 6 gives an overview and analysis of current methods that are available for convergence; the chapter concludes by identifying opportunities to improve and complement the database of methods for convergence. Chapter 7 elaborates on the design of a technique for similarity detection and two methods for convergence, and chapter 8 describes the requirements for a tool to support the designed artefacts. Chapter 9 describes the evaluation of the designed methods and technique; in section 9.6 the conclusions of all three evaluation workshops are combined and discussed. The thesis ends with conclusions in chapter 10 and with limitations and directions for future research in chapter 11. Background information can be found in the appendices; all evaluation results, as well as the raw evaluation workshop data, can be found on the CD-ROM on the last page of this thesis.


2 Research question, sub questions and methodology

In this chapter the main research question and its sub questions will be introduced. Next, the methodology to be followed and its accompanying guidelines will be presented.

2.1 Research questions

The central question to be answered in this thesis is: How can convergence processes become more successful and effective?

To evaluate the performance of a method for convergence, an objective set of metrics is needed. This set will be formed by answering the following sub question.

1. What are the dimensions of success and effectiveness of a method for convergence and how can they be measured?

To identify opportunities to make convergence more successful and effective, an overview and analysis of the currently available methods for convergence is needed. This will be done by answering the following two sub questions.

2. Which approaches to execute a convergence pattern of collaboration currently exist?

And

3. What factors contribute to the success and effectiveness of executing a convergence pattern of collaboration in a GSS context?

Based on the answers to the three sub questions above, the main question of this research can be answered. Besides making an addition or a change to the database of methods, it is also useful to describe the functionalities that a platform needs in order to support the method. For this thesis this enables evaluation of the method, and it adds to the usability of the method(s) in general. Because of the benefits of using a Group Support System (GSS) to support groups in a collaboration task and the large field of applicability of GSS, a GSS will be used as the supporting platform. The benefits of GSS will be explained in chapter 3.

The fourth sub question is:

4. What capabilities must a GSS offer to support a successful and effective convergence pattern of collaboration?

2.2 Methodology

This section outlines the methodology followed in this thesis to answer the questions in the section above. The description of the methodology is based on the framework developed by Hevner, March, Park and Ram (2004). Some researchers criticise the Hevner framework and argue for obligatory field testing of IT artefacts and for a central placement of IT artefacts and social practices in the research, for instance by using design probes; see, among others, Rohde, Stevens, Broedner and Wulf (2009). Other researchers make extensive use of the Hevner framework, like Davis et al. (2007). For this thesis the framework is appropriate, since it focuses on the creation and evaluation of innovative and useful IT artefacts, as is discussed by Hevner et al. (2004). According to Carlsson (2006), research in the field of information systems is characterized by two paradigms, behavioural science and design science. The behavioural science paradigm seeks to develop and verify theories that explain or predict human or organizational behaviour. Hevner et al. (2004) explain that the design science paradigm seeks to extend the boundaries of human and organizational capabilities by creating new and innovative artefacts. Both paradigms are the foundation for research in the field of information systems, and Hevner et al. (2004) argue that their combination leads to successful research in this field. Hevner and colleagues (2004) provide a framework for information systems research, visualized in Figure 2.1. Within this thesis this framework will be used to combine the design science and behavioural science paradigms. Design science is chosen for this thesis because it aims at improving practice; the use of this methodology is often referred to as ‘improvement research’. In an effort to improve practice, design science research aims to ‘produce and apply knowledge of tasks or situations in order to create effective artefacts’ (Davis et al., 2007).

Figure 2.1, Framework for IS research, adapted from Hevner et al. (2004)

The environment provides the need and relevance for the research, and the knowledge base provides the foundations and methodologies. The research itself is an iterative process: building and evaluating are repeated several times. This iterative process results in artefacts that comply with the business needs. Hevner et al. (2004) provide seven guidelines for design science research; the guidelines and their implications for this thesis can be found in Appendix A. Specifying a methodology for this research is important because it allows the research to be communicated, justified and potentially developed cumulatively, as is exemplified by Gregor and Jones (2007).


3 Environment and business need

This chapter gives an overview of the environment in which this research is situated. The chapter is structured according to the Hevner framework for IS research, which is visualized in Figure 2.1, and concludes with a summarizing description of the business need.

3.1 People & organizations

The need to collaborate is dominant in current work practices almost everywhere. Co-workers need to work together to accomplish their tasks and achieve goals. Organizations need to work together to maximize profit and ensure continuity. Students work together in project teams to complete courses and acquire skills, knowledge and grades.

Facilitation

In almost all GSS-supported collaboration processes, a facilitator guides the group through the process. Research indicates that the facilitator is as important for the success of a collaboration process as the tool used; see for instance the research by den Hengst and Adkins (2007). The tasks of a facilitator range from preparing the process, handling the GSS and executing the process to presenting the results, and all intermediate steps. Facilitation roles and tasks are described by, among others, Clawson and Bostrom (1997), Kolfschoten, Appelman, Briggs and de Vreede (2004) and de Vreede, Boonstra and Niederman (2002). A facilitator does not need to be a domain expert, but is specialized in guiding groups to a goal. This means that he can, among other things, handle group dynamics, resolve conflicts, set agendas in which all participants’ stakes are accommodated, and manage time and the quality of the results during the process. This clearly is a complex task, and it differs per instance, since group goals, task complexity and stakeholders’ objectives differ for every collaboration process. Facilitation tasks can be grouped into four categories: (1) managing the quality of the outcome, (2) managing participants’ interests and the relations amongst participants, (3) managing resources and (4) self-management. Regarding facilitation interventions, a distinction can be made between task interventions and interactional interventions, as is described by de Vreede, Niederman and Paarlberg (2001). Research by Kolfschoten, den Hengst-Bruggeling and de Vreede (2007) showed that even experienced facilitators need a significant amount of time to prepare for a GSS intervention, and that in most cases the workshop does not follow the intended plan. Besides experienced facilitators, collaboration processes can also be facilitated by people who have little or no facilitation experience. In CE these are referred to as practitioners. A practitioner is not a collaboration engineering expert, but can best be described as a domain expert in a certain field who wants to facilitate a collaborative activity with his work group or colleagues. This is in contrast to a facilitator or collaboration engineer, who is not a domain expert but is specialized in collaboration engineering (Kolfschoten, 2007). A practitioner has domain-specific knowledge, but is not familiar with the design of collaboration processes, group dynamics, typical pitfalls, etc.

Convergence in collaboration processes

A reduce and clarify step is needed and useful in some collaboration processes. To give an overview of situations where convergence is used, some examples are given in this section.

The first example shows the application of a reduce and clarify pattern of collaboration in a risk assessment collaboration process. The second example shows the application of a reduce and clarify step to enable voting on a set of concepts. The third example shows the application of a clarify step only, in an information gathering process for crisis response. The last three examples all show the application of a reduce and clarify step in three different situations, for which a general heading could be feedback gathering and prioritization.

ING Group, a financial services firm, conducts collaborative Risk and Control Self Assessment processes in all of its branches across the world. The collaboration process used contains a step to brainstorm risks for relevant impact areas; the following step is designed to reduce and clarify the list of brainstormed risks, in order to identify the key risk definitions (de Vreede, et al., 2009).

In a complex, aviation-related environment, a group of engineers wants to exchange best practices about problems encountered in the field of technical design and the drawing of parts. The idea is that the same problems occur at multiple locations, so that a database of relevant best practices is useful in finding solutions. The team of engineers is very large and works at different locations across Europe. Once every two weeks, a part of the group gathers at one location or in a virtual meeting room. The team then uses a GSS to brainstorm relevant problems to write a best practice about and add to the database. After the team has brainstormed topics, it faces the task of extracting the most important ones to write a best practice about. The list of potential topics contains redundant topics, and the meaning of the topics is not necessarily shared among the entire group. This calls for a reduce and clarify step before the team can vote on the importance and feasibility of the topics. Based on the voting, a final selection is made. This example is taken from an unpublished internship report written by Duivenvoorde (2007); the internship was situated with EADS in Munich. In the workshop design the convergence step takes 29% of the total time of the workshop.

An example from Appelman and Driel (2005): the crisis response team of the Port of Rotterdam in the Netherlands uses a collaboration process to acquire situational awareness when an accident or incident occurs. At such moments the crisis response team quickly needs information from a variety of sources to get an overview of the size and nature of the incident or accident. Part of this process is a clarify step, in which a crisis response team member is given the opportunity to select sources or pieces of information that are not clear or detailed enough. The goal of this step is to create an overview of which information the team is missing.

Fruhling and de Vreede (2006) describe a collaboration process to involve stakeholders into a value-based software engineering process. The involvement of several stakeholders is needed to perform usability testing. In this collaborative usability testing process, several convergence steps are present. The first one is intended to formulate key problem statements, after the stakeholders have executed a scenario and have written down the problems they experienced. The second one is intended to highlight the importance of certain issues, after the participants have given feedback on the urgency to address the issues found. In the solution finding phase of the process a convergence step is included to find the key suggestions for resolving the issues found. Before this step the participants are asked to give their suggestions to resolve the issues found.

As a last example, two collaboration process designs by Bragge, Tuunanen, den Hengst and Virtanen (2005) and Bragge, Merisalo-Rantanen and Hallikainen (2005) are chosen. One collaboration process was designed to develop a road map for the emerging mobile marketing sector. In this process the various stakeholders are first asked to identify drivers and barriers for mobile marketing in predefined categories. The following convergence step aimed to derive the key barriers by cleaning up the categories, merging and rephrasing the barriers. The other collaboration process was developed to gather end-user feedback on a software system. In this collaboration process, convergence was used to formulate the most important feedback and development items, after all participants were asked to write down their feedback and development ideas. In the workshop designs of these two collaboration processes the convergence steps take 28% and 33% of the total workshop time, respectively.

3.2 Technology

The computer is now commonplace in the home and office, and the combination of computers and other forms of electronic communication leads to new and different forms of interaction between people. As envisioned by Ellis, Gibbs and Rein (1991), the outcome of this technological marriage is the electronic workplace. The study of these kinds of systems is part of an interdisciplinary field: Computer-Supported Cooperative Work (CSCW). Ellis et al. (1991) define CSCW as follows: ‘CSCW looks at how groups work and seeks to discover how technology can help them work’. Indeed, the electronic workplace came into existence. Computers and electronic communication channels enable a variety of ways in which teams can collaborate. A first classification can be made regarding the geographic location of the participants and the time of collaboration. Four different scenarios are possible, as depicted in Figure 3.1. This figure is based on a paper by Ellis et al. (1991); however, Johansen (1988) was the first to present this classification.

Figure 3.1, Ellis et al. (1991): collaboration scenarios

The term CSCW is often mentioned together with the term groupware; the latter is defined by Ellis et al. as ‘computer-based systems that support groups of people engaged in a common task (or goal) and that provide an interface to a shared environment’ (Ellis, et al., 1991). A way to get an overview of the breadth of the possibilities is to look at another taxonomy presented by Ellis et al., which is based on the application level of the collaboration tools. The two dimensions identified by Ellis et al. (1991) are visualized in Figure 3.2; a differentiation is made between the extent to which tasks are common and the extent to which an environment is shared.

One instance of the class of groupware is the Group Support System (GSS), also often referred to as Group Decision Support System (GDSS) or Group Decision Room (GDR). These systems provide computer-based facilities for the exploration of unstructured problems in a group setting (Ellis et al. 1991). Often these systems have a client-server infrastructure and are characterized by anonymity for the participants and the ability to deal with the input from participants in a parallel way. The systems are known to create lock-in effects and can provide the participants with an agenda to structure the activity. Voting is also easier, because it is automated; see Briggs et al. (2003) or Jessup and Valacich (2003). A large number of (multi)national organizations use GSSs for their daily meetings and group work, as is discussed by de Vreede, Vogel et al. (2003). De Vreede and Wijk (1997) describe a case of the introduction of a GSS in a large Dutch insurance company, Nationale Nederlanden N.V. The use of the GSS led to a decrease in project time and man hours of 55%. Both the management and the employees concluded that using a GSS to conduct a meeting is more efficient and productive than a normal face-to-face meeting. The introduction of a GSS at IBM led to similar results: the costs for man hours in projects decreased by 51% on average. From these results it was calculated that one hour of work in a GSS environment equalled 2.3 hours of work in a normal face-to-face meeting; see Martz et al. (1992) and Nunamaker Jr. et al. (1989). A study at Boeing, by Post (1993), confirms these results: meetings are experienced as being more efficient and effective, and the perceived quality of the outcomes is higher. The European organization EADS reports savings in project time of up to 33% and reductions in man hours of up to 50%. The employees who used the GSS rated their satisfaction with the GSS at 4.3 on a 5-point scale, according to field research by de Vreede et al. (2003).

Figure 3.2, dimensions of the groupware spectrum, adapted from Ellis et al. (1991)

Contrary to this, organizations that sell GSSs report declining sales, and GSSs within organizations become abandoned after several years; Agres, de Vreede and Briggs (2005) as well as Duivenvoorde (2008), among others, note this. There is evidence that this is due to the complexity of facilitating and operating a GSS. Within organizations it is costly to train employees to facilitate a GSS, and those employees also have their own tasks to take care of. Designing a correct GSS collaboration process also requires skill and experience. A balance has to be found between the resources needed for and the resources available for the planned GSS workshop. Several GSSs are commercially available, examples of which are CS Results GMBH (2006), Facilitate.com (2009), Groupsystems.com (2007), MeetingDragon (2009), Meetingworks (2009), TeamSupport (2007), Teamworks (2009), Zing Technologies (2009), Crealogic (2009) and Synthetron (2006).


Due to availability and license costs, it was not possible to work with all of them. Only Nemo2, ThinkTank and TeamSupport (2006; 2007; 2007) were used with real groups in real collaboration processes. For the other tools, only (sales-related) information and screenshots were available. ThinkTank (TT), Team Support (TS) and Nemo2 were compared based on a set of criteria adopted from Hayne (1999).

                      ThinkTank   Team Support   Nemo2
Pre: agenda              ++            0           ++
Pre: roster              ++           ++           ++
Pre: tool selection      ++            0           ++
During: monitor           0            0            +
During: task             ++           ++           ++
During: process           +            +            +
During: record           ++           ++           ++
Post: document           ++           ++           ++

Table 3.1, GSS functions based on Hayne (1999)

TT and Nemo2 enable building an agenda beforehand and sharing it with the participants. TT and Nemo2 offer a variety of tools (brainstorming, clustering and voting) that can be used in various configurations; TS has brainstorming and limited clustering functionality. Only with Nemo2 is it possible to monitor the activity of the participants during the process. With all three GSSs it is possible to start and stop a functionality (task) as desired and to give digital instructions. During the process it is possible with all three packages to give extra instructions and to modify the process. Only the facilitator can delete or merge items. None of the packages provides facilitation support in the form of guidelines, hints, tips or best practices. All three packages create a log file during a workshop that can be downloaded afterwards; typically the log files capture all inputs from the participants and the agenda. With regard to convergence, the three GSSs mentioned offer support by allowing the participants to vote on brainstormed items.

3.3 Business need

From the definition of convergence it follows that the input for a convergence process is a (large) set of information of which the meaning and instrumentality are not necessarily shared among all stakeholders who collaborated on the set of information or are affected by it. One point where we encounter this is after a brainstorming session. Helquist, Santanen and Kruse (2007) mention the consolidation of brainstormed ideas and the convergence on potential solutions to problems as applications of convergence steps of collaboration. Badura, Read, Briggs and de Vreede (2009) mention that convergence is necessary because ideation activities will generate more ideas than a group will find useful. Reasons why a group generates so many ideas are identified by Osborn (1993); these include the power of association (p. 154) and the stimulative effect of rivalry (p. 154). These two reasons give insight into why the group perceives the number of generated ideas as too large: they indicate that ideas can be similar to each other, can be inspired by each other and can be improved versions of other ideas. Brainstorming is an essential step in creating the solution space in problem solving, as is exemplified by Briggs et al. (1997). Brainstorming (or ideation) is a direct application of creativity for generating ideas. As is shown in the overview by Warr and O’Neill (2005), all creative process models involve a step of generating ideas and a step of evaluating ideas, next to the analysis and preparation of the problem. A generic creative process model is visualized in Figure 3.3. It is a cycle that starts with problem preparation and ends with idea evaluation; as indicated by the arrows, at every step it is possible to go one step back. Warr and O’Neill (2005) also provide a generic definition of creativity that can be adjusted to more specific fields or applications: ‘Creativity is the generation of ideas, which are a combination of two or more matrices of thought, which are considered unusual or new to the mind in which the ideas arose and are appropriate to the characteristics of a desired solution defined during the problem definition and preparation stage of the creative process.’

Figure 3.3, Generic Creativity Process Model, adapted from Warr and O’Neill (2005)

Although Osborn (1993) and Taylor, Berry and Block (1958) do not agree on whether one should brainstorm in real or in nominal groups, they both share the assumption that the larger the number of ideas produced, the greater the probability of achieving an effective solution. In later research (the book by Osborn (1993) was first published in 1953), the effects of using nominal versus real groups were further explored and more evidence was found to support the above-mentioned assumed correlation of quantity and quality (Santanen, Briggs, & de Vreede, 2002). As Briggs (1997) puts it, in problem solving for instance, the quality of the ideas generated constitutes an upper limit on the quality of the problem solving process. Briggs et al. (1997) argue that problems can be so complex that no person alone has the experience, knowledge and resources to solve them. In those cases stakeholders must make a joint effort to achieve their goals, a key part of which is ideation. Santanen and de Vreede (2004) describe that people who have to solve large, complex problems tend to think within a bounded, familiar and narrow subset of the solution space. Furthermore, Santanen and de Vreede (2004) state that 'In complex problem solving, subjects routinely overlook up to 80% of the potential solution space and are even unaware that they are doing so'. Santanen, Briggs and de Vreede (2004) claim that the attributes that make these ill-structured problems so difficult to solve also make them well suited for creative problem solving.

After the idea generating phase follows a phase of idea evaluation, see Figure 3.3. The goal of evaluation is to extract the instrumentality of the brainstormed ideas. Because a convergence step is needed in between, creative problem solving can serve as a concrete application of convergence throughout this thesis. De Vreede, Briggs and Massey (2009) provide an overview of concrete examples, which are used to exemplify the need for and application of a convergence step in a collaboration process.

From the examples above and our own experience we conclude that the necessary convergence step takes around 30% of the time allocated for a workshop. A reduction of the time needed for convergence, without compromising its quality, allows participants to spend less time on a workshop or to use more time for other aspects of the workshop. Therefore the business need that can be identified is the call for swift selection of relevant concepts after a brainstorm activity. The benefits of brainstorming for solution generation, creative problem solving, public participation or evaluation workshops are clear. A GSS is a useful platform to support groups in executing their collaboration task. Experiences from the field describe that convergence activities are perceived as difficult and time consuming. This indicates that the knowledge base of methods and tools to support convergence can be improved. Gaps exist in the areas of information overload and the cognitive effort and time needed to complete a convergence task. Convergence is problematic mainly, but not exclusively, in large groups with large brainstormed sets of concepts. The research questions in this thesis are formulated to (1) develop a measurement framework to enable the measurement of success and effectiveness of convergence and to enable the selection of a suitable method, (2) give an overview to enable analysis of current methods used for convergence and (3) design and evaluate additional methods, or changes to existing methods, to overcome the hurdles to effective and successful convergence.

Hevner et al. (2004) rightly state that 'the relevance of any design-science research effort is with respect to a constituent community'. The constituent community for this research is composed of every employee interested in efficient collaboration within their project team, business unit or organisation. Practitioners, facilitators and collaboration engineers also belong to this community. The research is relevant to this community because it proposes a solution for the time consuming step of convergence in GSS supported meetings. This convergence step is also characterised by a high level of cognitive load, for the facilitator or practitioner as well as for the participants. In this thesis a solution will be given that aims to minimize this cognitive load. The research is also relevant to this community because it removes one of the hurdles for successful GSS implementation within organisations.


4 The convergence pattern of collaboration
This chapter defines the process of convergence and provides its background. This is done by describing the two components of the convergence pattern of collaboration: reduce and clarify. Further, the definitions of effective convergence given by professional facilitators in the field are presented and discussed, and four theories relevant for convergence are introduced. The chapter is part of the knowledge base as described in the Hevner framework for IS research (Hevner et al., 2004), the goal of which is to gather knowledge to create an effective solution. The chapter is input to answer the questions 'which approaches to execute a reduce and clarify pattern of collaboration currently exist?' and 'what factors contribute to the success and effectiveness of executing a reduce and clarify pattern of collaboration in a GSS context?'.

The goal of convergence is for a group to reduce their cognitive load by reducing the number of concepts they must address, as we stated in the introduction. This goal is also acknowledged by five professional facilitators who were interviewed on this topic. Their perceptions of, and definitions of, effective convergence are listed in Table 4.1. As can be seen in the table, the interviewees mention an element of reduction in their definitions, as well as an element of shared understanding.

Source and definition; effective and successful convergence is…

Marko Wilde: 'A successful convergence would collaboratively turn a whole set of gathered ideas in a collection of meaningful subsets, which allow to work with towards the goal of the session' (personal communication, October 1st, 2009)

Hans Mulder: 'Consensus on process and outcome, validated by predefined criteria.' (personal communication, October 1st, 2009)

Hugo Verheul: 'The end result of effective convergence is a set of concepts that the group can continue working with and a set of concepts that has the right level of abstraction with respect to the group goal. A by-product of effective convergence is focus and trust within the group.' (personal communication, September 28, 2009)

Jan Lelie: 'Is one that where the agreed actions have been executed. I also have clear deliverables from a meeting (like the 5 potentially most successful action, or two ideas for further research, or ...). The output of a successful convergence is a short list with sequenced or prioritized ideas or items.' (personal communication, October 1st, 2009)

Mike Zhu, TeamSupport: If, as a problem owner, I get a lot of information, do a little bit of thinking and make a great decision, then I will consider convergence a success. (personal communication, April, 2010)

Helquist, Santanen and Kruse (2007), Briggs et al. (2003): Convergence is the process by which groups identify logical groupings or threads from a myriad of brainstorming ideas or potential solutions. This process effectively decreases and organizes the decision space into a more coherent product. At least three aspects are important: filtering concepts by eliminating or consolidating them, resolving ambiguous or conflicting definitions, and synthesizing concepts to identify relationships and threads.

Table 4.1, Facilitators' definitions of effective convergence.

Table 1.2 visualizes the reduce and clarify patterns of collaboration including their sub patterns and definitions, as described in various publications, for instance see Briggs et al. (2003), de Vreede and Briggs (2005) or Kolfschoten (2007).

4.1 Reduce
As can be seen in Table 1.2, the reduce pattern of collaboration can be split up into select, abstract and summarize. The goal of reduction (of a set of concepts) is to lower cognitive load (Davis, et al., 2007). The sub patterns select, abstract and summarize are discussed below. Chapter 6 gives an overview and further analysis of current methods that can be used to enable a select, abstract and/or summarize pattern of collaboration.


Select
To be able to select concepts from a larger set of concepts, selection criteria are needed. Participants need to reach agreement on the criteria and on the measurement of the criteria. The selection of concepts can be supported in several different ways, roughly ranging from selection based on voting to selection based on plenary discussion. Voting can be organized in multiple ways. One way is selection based on the average value on one or more criteria; a variety of voting scales can be used for this purpose. A second way is the use of a limited number of checkmarks that each participant can divide among the concepts, based on the perceived instrumentality of the concepts. Asking the participants to rank a set of concepts is a third way. The plenary discussion approach also comes in more than one form: the discussion can be highly structured or left entirely to the participants.
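To make the first two voting variants concrete, the sketch below illustrates how they could be tallied in a GSS back end. It is our own illustration under assumed data shapes (ballots as dictionaries mapping concept ids to scores or checkmark counts), not part of any thinkLet described in this thesis.

```python
from collections import defaultdict

def average_scores(ballots):
    """Each ballot maps a concept id to a score on some voting scale.
    Returns concepts ranked by their average score, highest first."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ballot in ballots:
        for concept, score in ballot.items():
            totals[concept] += score
            counts[concept] += 1
    averages = {c: totals[c] / counts[c] for c in totals}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

def checkmark_tally(allocations):
    """Each allocation maps a concept id to the number of checkmarks a
    participant assigned to it. Returns concepts ranked by total marks."""
    tally = defaultdict(int)
    for allocation in allocations:
        for concept, marks in allocation.items():
            tally[concept] += marks
    return sorted(tally.items(), key=lambda kv: kv[1], reverse=True)
```

For example, two ballots `{"a": 4, "b": 2}` and `{"a": 2, "b": 5}` rank concept b first on average score; ranking-based selection could be added analogously by summing rank positions per concept.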

Abstract
Two different types of abstraction are described by Smith and Smith (1977) in their database-abstraction research papers: they make a distinction between aggregation and generalization. Aggregation is defined as 'an abstraction in which a relationship between objects (concepts in the terminology of this thesis) is regarded as a higher level object'. For example, a certain relationship between a type of car, a date, a person and a credit card can be abstracted as a car reservation. Generalization refers to 'an abstraction in which a set of similar objects is regarded as a generic object' (Smith & Smith, 1977). For example, a set of employed persons can be abstracted as employees. This type of abstraction disregards individual differences between concepts; details can therefore get lost when abstracting, which follows from the definition in Table 1.2. It is also plausible that when participants abstract, they use both aggregations and generalizations. We believe that in the collaboration context it does not really matter what the nature of the abstraction is: abstracting by identifying a certain relationship between concepts (aggregation) or by identifying similarities between concepts (generalization) both lead to a situation where detail is lost. It depends on the situation and the goal of the group whether, and to what extent, it is desirable to lose details. In the latest book describing the library of thinkLets, by Briggs and de Vreede (2009), the abstract sub pattern of collaboration is classified under the organizing pattern of collaboration, because abstracting also reveals relationships between concepts.
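The two abstraction types can be restated in data-modelling terms. The sketch below is purely illustrative of Smith and Smith's distinction; the class and function names are our own invention, not theirs.

```python
from dataclasses import dataclass

# Aggregation: a relationship between several lower-level concepts is
# regarded as one higher-level concept (the car-reservation example above).
@dataclass
class CarReservation:
    car_type: str
    date: str
    person: str
    credit_card: str

# Generalization: a set of similar concepts is regarded as one generic
# concept; individual differences are deliberately dropped.
def generalize(employed_persons):
    """Abstract a list of (name, role) pairs into a generic set of
    employees. The role detail is lost, illustrating that this kind of
    abstraction discards detail."""
    return {name for name, _role in employed_persons}
```

Note how `CarReservation` keeps all four constituents but hides them behind one higher-level name, whereas `generalize` actively throws the role information away: both moves reduce the amount of detail a group must attend to.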

Summarize
The goal of summarizing is to capture the essence of concepts without eliminating unique concepts. In a recent paper on the effects of convergence in a group, Badura et al. (2009) do not mention summarize as a sub pattern of the reduce pattern of collaboration in their overview of the convergence patterns of collaboration. This is remarkable, because summarizing is a very logical way to reduce a large set of information. In order to summarize, concepts need to be rephrased to bring down the total number of concepts while keeping the highlights and omitting unimportant details. It is of course also possible to combine summarizing with making a selection of concepts: a natural way of working could be to first select the instrumental concepts and then summarize these.

4.2 Clarify
The clarify pattern of collaboration is defined as: 'move from having less to having more shared understanding of concepts and of the words and phrases used to express them.' Deshpande et al. (2005) define shared understanding as 'an objective state achieved through interactive processes by which a common ground between individuals is constructed and maintained.' Shared understanding is obtained through social interaction processes (Nonaka & Konno, 1995). Successful collaboration has shared understanding as one of its outcomes; this is also visualized by the cognitive-affective model of communication by Te'eni (2001) and by Kincaid's model of convergence (Rogers & Kincaid, 1981). To understand events and make sense of information, people can use a combination of five possible strategies (Weick et al., 1985): effectuating, triangulating, affiliating, deliberating and consolidating.

4.3 Theories and models of convergence
This section describes four theories and models of convergence, or that affect convergence. Table 4.2 gives an overview and a short description of each theory or model. The sections below elaborate on each.

Theory / model: description. Reference.

Kincaid's model of convergence: spiral model that shows the process of converging in a GSS context (Rogers & Kincaid, 1981). Slater and Anderson (1994).

Dual task interference: failure to process new information because the participants' attention is blocked by another task. Heninger et al. (2006).

Media Synchronicity Theory: matching media capabilities to the collaboration task. Dennis et al. (2008).

Cognitive Load Theory: cognitive load (CL) is defined as 'the cognitive effort made by a person to understand and perform his task'. Sweller, van Merrienboer and Paas (1998).

Table 4.2, theories and models on convergence

Kincaid's model of convergence
Historically, many communication models have been developed; most of them had in common that they modelled communication as a linear process. Two other assumptions commonly made in these models were that (1) information is a physical substance and that (2) individual minds are separate. These assumptions led to seven biases, formulated by Rogers and Kincaid (1981) and visualized in Figure 4.1.

Figure 4.1, Kincaid's biases in past communication theory and research, adapted from Rogers and Kincaid (1981)

This led Rogers and Kincaid (1981) to develop their own model of communication, based on two principles: (1) information is inherently imprecise and uncertain, and (2) communication is a dynamic process of development over time. The following citation from Slater and Anderson (1994) describes the two conditions needed for convergence: 'Mutual understanding consists of shared psychological interpretations of information, and mutual agreement is a shared belief in the validity of those interpretations. Both conditions - mutual understanding and mutual agreement - are necessary for convergence (uniting in common interest or collective action) to occur.' (Slater & Anderson, 1994). Slater and Anderson (1994) further state that 'A convergent view of communication acknowledges


that communication is a dynamic process rather than a series of discrete events, and that messages are sent and received simultaneously', and that 'convergent models allow for different "realities" on the part of the communicators, based on the participants' past experiences and world view' (Slater & Anderson, 1994).

Figure 4.2, Adaptation of Kincaid's Model of Convergence (1981) by Slater et al. (1994)

Slater and Anderson (1994) adapted Kincaid's model of convergence to visualize the convergence process in a GSS context. Participants constantly and iteratively interpret and express concepts; this is visualized in the spiral shown in Figure 4.2. After a few rounds of expressing and interpreting concepts, shared understanding is created. Slater and Anderson (1994) use the concept of noise to indicate distortion, disturbances or distraction in the communication process. Noise may be internal or external to the participants and may be physical, physiological or psychological. Slater's adapted version of Kincaid's model of convergence sketches a good picture of the process of convergence in a GSS context. The broad concept of noise captures all possible sources of efficiency losses. The objectives in developing a convergence process and a tool to support it are to minimize the number of rounds needed and to minimize the influence and creation of noise; mutual understanding and mutual agreement need to be present. Examples of 'noise' in brainstorm artefacts are redundant and off-topic concepts, as well as concepts with a different level of refinement or ambiguously formulated concepts. Examples of noise during convergence are found in the input (the brainstorm artefact) and in the process itself. During the process of convergence, focus can be lost because of, for example, hidden agendas or the inability of the facilitator to structure and guide the discussion.

Dual Task Interference In their paper Heninger, Dennis and Hilmer (2006) describe the problem of failure to process new information received by participants in a GSS workshop. They propose that the cause of this failure


does not lie in the social processes, but in the individual cognition of the participants. In their research they found evidence that dual task interference is the cause of this failure to process new information (Heninger, et al., 2006). Especially in a (divergent) GSS setting, the participants' limited attention has to be divided between adding concepts and comments to the discussion and reading and commenting on the concepts of others. Interference between these two tasks leads to a failure to process new information added to the discussion by others (Heninger, et al., 2006). In a convergent activity, the attention needs to be divided between describing concepts; merging, deleting and reformulating concepts; and selecting, summarizing or abstracting concepts. Several models and theories, such as the single channel theory and the multiple resources model, stress the fact that the capacity of the human information processing system is limited (Cellier & Eyrolle, 1992). Cellier and Eyrolle (1992) showed that interruption of a task, in order to carry out another task, results in errors and increased processing time. Koch (2008) showed that doing two tasks at the same time leads to worse performance than doing one task at a time. Reasons for this are a 'central information-processing bottleneck at the level of decision and response selection', but also 'the encoding and retrieval of information in short-term memory as well as the cognitive control of task order' (Koch, 2008). If dual task interference occurs during brainstorming (or another divergence activity), this has implications for the following convergence activity: the occurrence of dual task interference during the divergence activity can make the convergence activity more complex and time consuming.
The reason for this is that the participants do not know the content and meaning of the list of concepts, and because they do not know the exact meaning, a lot of redundancy can be expected in the list of concepts. To solve this, an activity has to be undertaken to collaboratively describe all concepts and remove redundancy before the actual convergence can take place. The occurrence of dual task interference during divergence therefore detracts from the efficiency of this activity. Dual task interference is also likely during convergence itself, because the participants are asked to perform a complex task with multiple steps. In developing a convergence process and support tool, the aim should be to minimize dual task interference. In a convergence process, dual task interference can be minimized by serializing tasks instead of combining them.
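The redundancy-removal step mentioned above can be partially mechanized. The sketch below is a deliberately simple illustration of the idea, flagging near-duplicate concept descriptions via word-set (Jaccard) overlap; it is not the similarity-detection technique designed in this thesis, and the 0.6 threshold is an arbitrary assumption.

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two concept descriptions, in 0..1."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

def flag_redundant(concepts, threshold=0.6):
    """Return pairs of concept indices whose descriptions overlap strongly,
    as candidates for merging before convergence proper starts."""
    pairs = []
    for i in range(len(concepts)):
        for j in range(i + 1, len(concepts)):
            if jaccard(concepts[i], concepts[j]) >= threshold:
                pairs.append((i, j))
    return pairs
```

Presenting only the flagged pairs to the group, rather than the full list, is one way to serialize the describe-and-deduplicate task and thereby limit dual task interference.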

Media Synchronicity Theory
Media Synchronicity Theory (MST) is derived from Media Richness Theory (MRT) and focuses on the ability of media to support the communication processes of individuals as they work on tasks (Dennis, et al., 2008). MST argues that 'communication is composed of two primary processes, 1) the conveyance of information with deliberation of its implications, and 2) the convergence on meaning' (Dennis, et al., 2008). 'It also proposes that media have five capabilities which affect the effectiveness and efficiency by which individuals perform conveyance and convergence processes. Media Synchronicity Theory suggests that communication performance will be enhanced when media capabilities are properly aligned with the requirements of these two processes. For conveyance, media providing lower synchronicity (low feedback and high parallelism) are expected to yield better performance. For convergence, media providing higher synchronicity (high feedback and low parallelism) are expected to yield better performance. Established groups and newly formed groups have different requirements for conveyance and convergence and likewise different requirements for media capabilities.' (Dennis, et al., 2008). In general, high synchronicity is preferred for convergence (Dennis & Valacich, 1999). The hypotheses formulated by MST are visualized in Figure 4.3. Other research stresses the importance of media richness and the participants' familiarity


with both the medium and their co-participants (Carlson & George, 2004). DeLuca and colleagues (2005) also found evidence in an experimental study for hypothesis 1 of Media Synchronicity Theory, which implies that, in general, media of high synchronicity (high feedback, low parallelism) should be used for convergence tasks. They also found evidence that the need for synchronicity is lower for more established groups (H6).

Figure 4.3, Media Synchronicity Theory hypotheses, adapted from Dennis et al. (2008)

In contrast to the six patterns of collaboration defined by Briggs and several colleagues (2006; 2009), MST divides communication into two processes: a distinction is made between the conveyance and the convergence of information (Dennis, et al., 2008; Dennis & Valacich, 1999). The difference between communication and collaboration can be seen from the definitions; to sketch the full picture, the definitions of coordination and cooperation are also included. The terms collaboration, coordination and cooperation are often used interchangeably. In some cases the difference between cooperation and collaboration is not made explicit, which may lead to confusion. Cooperation is the process of reasoning and/or pooling of knowledge in the context of problem solving. Participants in a cooperation process divide the workload between them and each of them has a part to accomplish; the final result is the combination of all individual parts. However, every participant has to interact with the other participants to assure the coherence of the final result. Coordination is described as all rules of action to structure and harmonize cooperation. Thus, coordination is the necessary complement of a cooperation activity (Kolfschoten, 2007). Collaboration is defined as joint effort toward a common goal (Briggs, et al., 2003). Collaboration is also seen as a process in which parties with different perceptions of a problem situation can constructively explore their differences and search for solutions that go beyond their own limited perceptions. Unlike in cooperation, in a collaborative context all participants co-construct together; a distinction between individual work and group work is therefore not possible in the final result. Communication is an essential


aspect of cooperation, collaboration and coordination. However, communication differs from the other terms because it does not necessarily have an objective, and communication can be used as a means to cooperate, collaborate and coordinate. Thus, communication is not an end in itself, but a means towards an end.

Cognitive Load Theory
Cognitive load (CL) is defined 'as the cognitive effort made by a person to understand and perform his task' according to Sweller et al. (1998). Cognitive load has two dimensions: task based and person based. The task based dimension is called mental load and the person based dimension is called mental effort. Paas et al. (1998) define mental load as 'the aspect of cognitive load that originates from the interaction between task and subject'. 'Mental effort is the aspect of cognitive load that refers to the cognitive capacity actually allocated to accommodate the demands imposed by a task'. In cognitive load theory the assumption is made that our short term or working memory is limited to seven plus or minus two information elements, see Miller (1956). Our short term memory is thus limited. Cognitive load can arise from three different sources (Baltes, Dickson, Sherman, Bauer, & LaGanke, 2002). This leads to three different forms of cognitive load: (1) intrinsic cognitive load, (2) extraneous cognitive load and (3) germane cognitive load. In the literature, several ways are mentioned to manage, measure, predict and lower cognitive load. A full treatment is beyond the scope of this thesis; the interested reader can use Ayres and Gog (2009), Bannert (2002), Heninger, Dennis and Hilmer (2006), Santanen (2005) or Sawicka (2008) as a starting point. According to Kirschner (2002), 'the cognitive load of a facilitation task is determined by the complexity of the task'. In the case of facilitating GSS supported workshops, the complexity of the task arises from different sources: there are technical challenges in facilitating, but a facilitator also requires process and people skills. In the literature, multiple overviews are given of the tasks a facilitator has to fulfil in order to facilitate successfully, such as managing time, tracking the quality of the results and dealing with conflicts.
More elaborate lists of facilitation tasks can be found in research by Giordano (2005), Tarmizi, de Vreede and Zigurs (2006), den Hengst and Adkins (2005) and Wong and Aiken (2003). The reason why facilitators and participants find convergence difficult and time consuming is certainly related to their cognitive load: convergence often requires creating an overview of long lists of concepts in order to merge and/or select concepts, and cognitive load theory explains why this is found difficult.

4.4 Summary: a convergence process
A convergence task can be seen as a process with a corresponding procedure. The process consists of a series of tasks, determined by a procedure. The input for the process is formed by a set of concepts, and so is the output; the output can also include shared understanding, a state of mind among the participants. Using the input – process – output model to describe convergence is very useful to distinguish between the various factors that are involved, see Miller and Rice (1967). The input – process – output model was first described for use within GSS research by Nunamaker, Dennis, Valacich, Vogel and George (1991). Later, other GSS research adopted the model, among others Kolfschoten, den Hengst-Bruggeling and de Vreede (2007). The input consists of a group, a task, the context and an electronic meeting system (EMS). These form the input of a process, which delivers an output. This is visualized in Figure 4.4.


Figure 4.4, General input – process – output model for a collaboration task (Nunamaker et al., 1991)

The general model as visualized in Figure 4.4 is intended to model an entire collaboration process, whereas convergence is just a part of a collaboration process. To tailor the model to visualize only a convergence process the following changes have been made, see Figure 4.5. The changed model is explained in the sections below.

Figure 4.5, input – process – output model for a convergence task

Input
The input of any convergence process is formed by a list of concepts, mostly a brainstorm artefact, specific characteristics of which will be introduced in the next chapter. We know from Kincaid's (1981) model of convergence that distracting noise can be created during the creation of the brainstorm artefact. The interference of the two tasks (reading and writing) that the participants execute during brainstorming also has negative effects on the quality and usability of the brainstorm artefact. A convergence process should always have a clear task description, examples of which are: 'remove all redundant items' or 'select the top-ten most promising solutions'. The context in which convergence occurs is determined by the group, the facilitator and the resources available. Resources are formed by the time available and the capabilities of a platform to support the group.


Process
The process, convergence, is a composition of the two patterns of collaboration described in sections 4.1 and 4.2: reduce and clarify. Which patterns are selected depends on the group's goals and task. Kincaid's model of convergence shows that multiple rounds of expressing and interpreting are needed to reach mutual understanding; during these rounds, distracting noise can be created (Slater et al., 1994). Dual task interference (Heninger et al., 2006) shows us that it is preferable to serialize tasks to minimize interference between tasks. From Media Synchronicity Theory (Dennis et al., 2008) we learn that for convergence it is best to use a medium with high feedback capabilities and low parallelism. In Cognitive Load Theory the assumption is made that the human working memory is limited to processing seven plus or minus two information elements at the same time (Miller, 1956).

Output
The output of a convergence process consists of a set of concepts, the convergence artefact (the result of reduction), and 'clarification' (the result of the clarify effort). Metrics and characteristics will be introduced in the next chapter.
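The tailored input – process – output model can be summarized in code form. The sketch below is our own schematic rendering of the model's elements; the field names and the simple `converge` function are illustrative assumptions, not part of the model as published.

```python
from dataclasses import dataclass

@dataclass
class ConvergenceInput:
    concepts: list            # the brainstorm artefact (list of concept strings)
    task: str                 # e.g. "select the top-ten most promising solutions"
    group_size: int           # part of the context: the group
    time_available_min: int   # part of the resources: time (platform omitted here)

@dataclass
class ConvergenceOutput:
    artefact: list            # reduced set of concepts (result of reduction)
    clarified: bool           # crude proxy for the shared-understanding result

def converge(inp: ConvergenceInput, reduce_fn) -> ConvergenceOutput:
    """The process step: apply a reduce/clarify procedure to the input.
    reduce_fn stands in for whichever reduce pattern (select, abstract,
    summarize) the group chooses."""
    return ConvergenceOutput(artefact=reduce_fn(inp.concepts), clarified=True)
```

Separating the artefact from the clarification outcome mirrors the model's point that the output is both a set of concepts and a state of mind among the participants.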


5 Assessing the performance of a method for convergence
To assess the performance of a method for convergence, and the convergence collaboration process that it creates, objective metrics are needed. Metrics are derived from the dimensions of success and effectiveness of a convergence method. This chapter therefore answers the question 'What are the dimensions of success and effectiveness of a method for convergence and how can they be measured?'. The chapter is part of the knowledge base, as described in the Hevner (2004) framework for IS research, visualized in Figure 2.1. While writing this chapter we experienced how thin the line is between describing the knowledge base and making changes to it in order to achieve an application of the knowledge. First we compare dimensions of effectiveness and success found in GSS literature with dimensions found during interviews with professional Dutch and German facilitators of collaboration processes. From this comparison we derive the dimensions that will be used in this thesis for evaluation. The dimensions are introduced and a way to measure them is described. The chapter concludes with an integration of the success and effectiveness dimensions into the input – process – output model described in the previous chapter. The integration shows to which part of the process or artefact the metrics apply. In the future, this integrated view can also be used for the selection of a method for a convergence task.

5.1 Dimensions and metrics of success and effectiveness for a convergence method

Success is an umbrella concept and is defined as 'the appreciation of joint effort and its outcome by relevant stakeholders' (Duivenvoorde, Kolfschoten, de Vreede & Briggs, 2009). Success is important to include when assessing performance because the perception of success by participants and the facilitator contributes positively to the willingness to use and reuse a method for convergence. From that point of view, the success of a method for convergence can contribute to the sustained use of a GSS facility (Briggs, de Vreede & Nunamaker, 2003). Another important aspect of the performance of a method for convergence is effectiveness. We define effectiveness, following in 't Veld (1987), as 'the real result compared to the intended or expected result'.

Other researchers have already suggested dimensions and corresponding metrics to measure the success and effectiveness of a convergence approach. According to Davis, Badura and de Vreede (2008) and Davis, de Vreede and Briggs (2007), the dimensions can be split into two groups: result oriented and process oriented metrics. The result oriented metrics assess the artefact that is created during convergence, i.e. the set of concepts that is the result of the convergence activity. The process oriented metrics assess the process that is used to arrive at the result, the way of working (Badura et al., 2009). Criteria can also be extracted from a number of interviews; for abstracts of the interviews see Appendix C to Appendix H. Table 5.1 lists all dimensions found in the interviews and in literature to enable comparison. Comparison is possible because the table lists all equal dimensions on the same row. From Table 5.1 the following result oriented and process oriented dimensions are selected. Result oriented: speed, reduction, refinement, comprehensiveness, shared understanding, satisfaction with the result, commitment with the result. Process oriented: acceptance, satisfaction with the process. We believe that the dimension ease of use is also relevant, but it is already addressed in the dimensions acceptance and satisfaction. The following paragraphs will discuss the relevance, meaning and use of the metrics


mentioned in Table 5.1. Not all sources make a distinction between process and result oriented dimensions; to provide structure, Table 5.1 is organized according to these two categories of dimensions.

Sources compared (the columns of the original table): Davis et al. (2007; 2008); Badura et al. (2009); H. Verheul (personal communication, 28 September 2009); M. Wilde (personal communication, 1 October 2009); D.J. van den Boom, M. van Eekhout (personal communication, 1 October 2009) & Jan Lelie (personal communication, October 2009); Duivenvoorde, Kolfschoten, Briggs and de Vreede (2009) & Hans Mulder (personal communication, 1 October 2009). Equivalent labels from different sources are grouped on one row.

Result oriented dimensions:
- Speed
- Comprehensiveness / level of comprehensiveness
- Shared understanding / level of shared understanding
- Reduction / level of reduction
- Refinement / level of refinement of outcomes / level of abstraction
- Ability to continue working with the convergence artefact
- Commitment with the result / participants committed to take action
- Acceptance by participants
- Consensus on outcome
- Satisfaction with the result

Process oriented dimensions:
- Acceptance by participants
- Consensus on process
- Ease of use for facilitator & participants
- Satisfaction of facilitator & participants / satisfaction with the process
- Focus within the group
- Trust within the group
- Commitment with the process
- Efficiency
- Productivity

Table 5.1, comparing different dimensions of success and effectiveness

5.1.1 Result oriented metrics This section introduces and discusses the result oriented metrics in no particular order. The dimensions that will be used are the ones presented in Table 5.1, plus the dimension redundancy. The level of redundancy is an important aspect of the result of a convergence activity (and of a divergence activity), as is explained below. To enable measurement of the result oriented metrics, coding of the converged and brainstorm artefacts is necessary. Badura, Read, Briggs and de Vreede (2010) have designed and validated a coding scheme that can be used to calculate the metrics presented below. The scheme uses a number of variables to code the data, which are presented in Table 5.2. Besides the


artefact-related dimensions, the participants' perception of the created artefact is also important. This perception can be measured by means of a questionnaire. A questionnaire for this purpose was designed by Kolfschoten and Briggs (2007) and validated by Duivenvoorde, Kolfschoten, Briggs and de Vreede (2009).

ID | Variable name | Description
RC | Raw Concepts | Number of concepts in the artefact
RC_off | Raw Concepts off-topic | Number of off-topic concepts
RC_on | Raw Concepts on-topic | Number of on-topic concepts
RC_U | Raw Concepts unambiguous | Number of concepts deemed unambiguous
RC_A | Raw Concepts ambiguous | Number of concepts deemed ambiguous
RCD | Raw Concepts Disaggregated | Number of disaggregated concepts in the artefact
RCD_unique | Raw Concepts Disaggregated unique | Number of unique disaggregated concepts
RCD_redundant | Raw Concepts Disaggregated redundant | Number of redundant disaggregated concepts
RCD_U | Raw Concepts Disaggregated unambiguous | Number of concepts disaggregated from RC_U
RCD_A | Raw Concepts Disaggregated ambiguous | Number of concepts disaggregated from RC_A

Table 5.2, coding variables, adopted from Badura et al. (2010)
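The coding variables of Table 5.2 can be captured in a small record from which the percentage-based metrics used later in this chapter are derived. The sketch below is illustrative only; the field names follow the variable IDs in Table 5.2, and the two derived properties correspond to the redundancy and ambiguity shares discussed in the following sections.

```python
from dataclasses import dataclass

# Illustrative record for the Table 5.2 coding variables;
# field names follow the variable IDs (RC, RC_off, ..., RCD_A).

@dataclass
class CodedArtefact:
    rc: int             # RC: raw concepts in the artefact
    rc_off: int         # RC_off: off-topic concepts
    rc_on: int          # RC_on: on-topic concepts
    rc_u: int           # RC_U: unambiguous concepts
    rc_a: int           # RC_A: ambiguous concepts
    rcd: int            # RCD: disaggregated concepts
    rcd_unique: int     # RCD_unique: unique disaggregated concepts
    rcd_redundant: int  # RCD_redundant: redundant disaggregated concepts
    rcd_u: int          # RCD_U: concepts disaggregated from RC_U
    rcd_a: int          # RCD_A: concepts disaggregated from RC_A

    @property
    def redundancy(self) -> float:
        """Share of disaggregated concepts that are redundant."""
        return self.rcd_redundant / self.rcd

    @property
    def ambiguity(self) -> float:
        """Share of raw concepts coded as ambiguous."""
        return self.rc_a / self.rc
```

Coding a brainstorm artefact and a converged artefact as two such records allows a before/after comparison of both shares; the counts themselves would come from the human coders applying the Badura et al. (2010) scheme.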

Speed As the number of concepts generated in a divergence activity can be very large, it is desirable for a group to be able to quickly select the concepts that are interesting enough to pay further attention to. The effectiveness of quick (electronically supported) divergence activities is partially lost when the group is not able to quickly make a selection of the concepts to continue with. Speed can be measured by the amount of time it takes for a group to complete the convergence activity. It is also interesting to compare this figure with the amount of time it took to generate the list of concepts. Therefore speed is measured as:

Speed ratio = time used for convergence [min] / time used for divergence [min] (4.1)

GSSs are known to speed up collaboration processes, see e.g. de Vreede et al. (2003). Especially in the divergent phase of a collaboration process time can be saved because participants can work in parallel. During convergence, parallelism is often not used. Badura et al. (2009) report a time of 45 minutes to converge a brainstormed list of 129 concepts to 29 concepts. Bragge, Merisalo-Rantanen et al. (2005) mention converging a set of 235 brainstormed concepts to 36 concepts in around 60 minutes (estimated), and Bragge et al. (2007) report convergence from 198 concepts to 31 in 80 minutes. The studies mentioned all used a GSS and the same process for divergence and convergence (the FreeBrainstorm and FastFocus thinkLets, see section 6.1); the participants of the workshops were professionals.

# | Time used for divergence [min] | Time used for convergence [min] | # brainstormed concepts | # concepts after convergence | Reduction level [%] | Speed ratio
1 | 15 | 45 | 129 | 29 | 78% | 3.0
2 | 35 | 80 | 198 | 31 | 84% | 2.3
3 | 30 | 60 | 235 | 36 | 85% | 2.0
Table 5.3, time spent on divergence and convergence, adopted from Bragge et al. (2005; 2007)
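The reduction level and speed ratio columns of Table 5.3 follow directly from the reported times and concept counts. A minimal sketch reproducing the figures in the table:

```python
def reduction_level(before: int, after: int) -> float:
    """Fraction of brainstormed concepts eliminated during convergence."""
    return (before - after) / before

def speed_ratio(divergence_min: float, convergence_min: float) -> float:
    """Time spent on convergence relative to time spent on divergence."""
    return convergence_min / divergence_min

# (divergence min, convergence min, concepts before, concepts after)
workshops = [(15, 45, 129, 29), (35, 80, 198, 31), (30, 60, 235, 36)]
for div, conv, before, after in workshops:
    print(f"{reduction_level(before, after):.0%}, {speed_ratio(div, conv):.1f}")
# prints "78%, 3.0", "84%, 2.3" and "85%, 2.0", matching Table 5.3
```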

Supporting convergence in groups

Assessing the performance of a method for convergence

28

Redundancy The output of brainstorming in a GSS context is a set of concepts that partially overlap in meaning. This overlap in meaning is referred to as redundancy. The extent to which redundancy is present puts a constraint on the type of follow-up activity and on the readability and usability of the artefact. For instance, voting on a set of redundant concepts leads to an invalid result because the same idea can be present in more than one concept, which leads to confusion when interpreting the voting results. Redundancy can be measured by comparing the variables RCD and RCD_redundant from the coding scheme by Badura et al. (2010). Redundancy is expressed as the percentage of redundant items compared to the total number of items.

Redundancy = RCD_redundant / RCD × 100% (4.2)

Reduction One of the goals of a convergence activity is to reduce the number of concepts under consideration. This lowers the cognitive load for the participants and reduces the time spent in the workshop. In large GSS workshops it is even unavoidable to reduce the list of concepts before continuing. The level of reduction can be measured using the following formula, originally defined by Davis et al. (2007) and adapted here to the coding scheme of Badura et al. (2010).

Reduction level = (RC of the brainstorm artefact − RC of the converged artefact) / RC of the brainstorm artefact × 100% (4.3)

The level of reduction is potentially limited by the level of comprehensiveness.

Comprehensiveness The goal of a convergence activity is to select and summarize concepts and create shared understanding of the concepts the group wants to pay further attention to. The inclusion or exclusion of a concept has to be appropriate with respect to the goal of the group. One can think of a situation where convergence was too rough and important concepts were lost; on the other hand, it is possible that convergence was not critical enough, resulting in a list of concepts that is too long. The level of comprehensiveness therefore reflects the group's opinion about the quality of the convergence activity with respect to its goal. It can be measured by questioning the participants or by asking an independent expert for his or her opinion. It is assumed that the list of brainstormed concepts contains a number of critical concepts next to a number of redundant, ambiguous and non-critical (and off-topic) concepts (e.g. a joke, a feeling or plain nonsense). The goal of the convergence activity is to retrieve those critical concepts, so by comparing the number of critical concepts in the brainstorm artefact with the number of critical concepts in the convergence artefact, the comprehensiveness can be determined (Davis, et al., 2008). It is likely that there is a positive causal relationship between the time spent on a convergence activity and the level of comprehensiveness. Davis et al. (2008) measured comprehensiveness using the following ratio:

Comprehensiveness = # critical concepts in the converged artefact / # critical concepts in the brainstorm artefact (4.4)


The closer this level is to one, the more comprehensive the output. A level of one would imply that all critical concepts from the brainstorm are present in the output of the convergence activity; zero would imply that none of them are. A level greater than one would imply that divergence has occurred during convergence. The level of comprehensiveness also tells whether the group has eliminated concepts that it should not have eliminated (Davis, et al., 2008). Determining whether a concept is critical requires the opinion of several experts. This expert dependency makes it a difficult metric to calculate in real time. Davis et al. (2008) report comprehensiveness levels from 16 workshops in a GSS context where a group of undergraduate students brainstormed and converged on the campus parking problem. The levels reported in this study ranged from 0.1 to 0.6 with a mean of 0.3 and a standard deviation of 0.2. In the search for critical concepts, one should include concepts that are on-topic (RC_on), unambiguous (RC_U) and unique (RCD_unique).
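A minimal sketch of this ratio; the critical-concept counts passed in are hypothetical and would in practice come from expert coding:

```python
def comprehensiveness(critical_after: int, critical_before: int) -> float:
    """Ratio of critical concepts surviving convergence. 1.0 means all
    critical brainstormed concepts are retained; a value above 1.0
    implies that divergence occurred during the convergence activity."""
    if critical_before == 0:
        raise ValueError("no critical concepts identified in the brainstorm")
    return critical_after / critical_before

# Hypothetical workshop: experts flag 10 concepts in the brainstorm
# artefact as critical, 3 of which survive convergence.
print(comprehensiveness(3, 10))  # 0.3, the mean reported by Davis et al. (2008)
```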

Refinement of outcomes In some workshops, only reducing and clarifying a list of concepts may not completely match the goal of the workshop. It may also be necessary to spend some time elaborating on the selected outcomes to further refine them, for instance to make the concepts actionable or measurable; in some workshops the constraint is used that all concepts should be formulated SMART (specific, measurable, attainable, relevant and time-bound). Although refining is not really the goal of a convergence activity (it is more a follow-up of this step, or a characteristic of the divergent activity), it is an interesting criterion to assess the quality of a GSS workshop. The level of refinement can be measured by (1) questioning the participants, (2) questioning the problem owner (workshop owner) or (3) asking an independent expert on the topic to assess the level of refinement. The level of refinement can be expressed as the ratio of the concepts with the right level of abstraction to the total number of concepts.

Refinement = # concepts with the right level of abstraction / total # of concepts × 100% (4.5)

Shared understanding / ambiguity Shared understanding is critical for a group to converge. 'Shared understanding is a state achieved through interactive processes by which a common ground between individuals is constructed and maintained.' From this definition by Deshpande et al. (2005) it becomes clear that using a GSS can be beneficial for creating shared understanding: in a GSS workshop, participants interact about ideas in a constructive way using commenting. A convergence activity that is executed while not all participants understand the meaning of the concepts at hand will lead to suboptimal results. Shared understanding can be measured during a workshop by questioning the participants. It could also be partially measured by letting an expert observe the behaviour (kind of questions, attitude and level of participation) of the participants. The creation of shared understanding in a GSS workshop can be hindered by dual-task interference (or attention blocking) (Heninger, et al., 2006). Again, it is likely that speed, also of the divergence activity, influences the level of shared understanding. Badura et al. (2009) compared the level of ambiguity in the set of brainstormed concepts with that in the set of converged concepts and found a reduction in ambiguity from 45% to 3% after executing a FastFocus thinkLet with a group of professionals. The decrease in ambiguity can be an indicator for


the fact that the level of shared understanding has increased. We will therefore use the level of ambiguity as a proxy for the level of shared understanding.

Ambiguity = RC_A / RC × 100% (4.6)

Satisfaction with the result Satisfaction with the result reflects the participant perception of the result and can be measured by using a questionnaire after the convergence activity took place.

Commitment with the result Commitment with the result reflects the participants' attitude towards the result: it indicates whether a participant thinks the result of convergence is acceptable, reflects his or her opinion and can be used for further work. It can be measured by means of a questionnaire after the convergence activity took place.

5.1.2 Process oriented metrics This section introduces and discusses the process oriented metrics. We include the dimensions identified in Table 5.1 and add the following dimensions: commitment with the process, productivity and efficiency. As suggested by Kolfschoten, Lowry, Dean and Kamal (2007), these three are dimensions of the success of a collaboration activity, of which convergence is a part, and are therefore relevant within this context. This set of process oriented dimensions can be measured by using a questionnaire at the participant level. A suitable questionnaire was developed by Kolfschoten et al. (2007) and validated by Duivenvoorde et al. (2009).

Acceptance by participants For a convergence activity to be successful and produce valuable outcomes, it has to be accepted by the participants. It is useless, and in some cases even impossible, to follow a process with a group that does not accept it. Therefore a minimum level of acceptance by the group is needed before the activity can take place. The exact level of acceptance can be measured by questioning the group or can be derived from the satisfaction level; it will be clear when there is no acceptance (de Vreede, Davison, and Briggs, 2003). Another dimension of acceptance is acceptance of the artefact that is created by the method. Especially when a follow-up activity is planned after convergence, it is critical that the participants accept the artefact created during convergence. This latter type of acceptance is referred to as commitment with the result and process. The need for acceptance is also stressed by some of the interviewees, see Table 5.1.

Satisfaction with the process An approach can be considered successful when both the participants and the facilitator are satisfied with the process and with the outcome(s). Satisfaction with the process also reflects the ease of use of the process. For both the facilitator and the participants, satisfaction can be measured by questioning them afterwards and by observing them during the workshop. Briggs et al. (2006) define satisfaction as 'an emotional response as a result of a perceived shift in yield with respect to personal goals'. Satisfaction can differ between the facilitator and the participants, and satisfaction with the process can differ from satisfaction with the result. Previous research by Duivenvoorde (2008) has indicated a positive causal relationship between satisfaction with the process and satisfaction with


the result. A prerequisite for a positive satisfaction score is that the method is accepted by the participants.

Facilitator dependence Approaches vary on multiple dimensions, one of which is the extent to which the approach depends on the skills and experience of the facilitator. If the success of an approach largely depends on the guidance of the facilitator, the chances of success will be larger when a more skilled and experienced facilitator is used. Facilitator dependence is to be determined by experts (collaboration engineers or experienced practitioners) on a three-point scale (low, medium, high). It is very likely that ease of use has a causal relationship with facilitator dependence.

Scalability Next to the time it takes to execute a convergence method, it is also of importance whether the performance of the method is influenced by the number of participants and the number of concepts under consideration. This characteristic is captured in the new dimension scalability. A method is scalable when its performance is not or only slightly influenced when the number of participants and concepts increases. Here 'performance of the method' refers to the combination of all other metrics mentioned in this chapter. Scalability is initially to be expressed on a three-point scale with the following labels: 1, not scalable; 2, scalable but the following metrics (…) will suffer; 3, scalable. Further refinement of the scale should be made during its usage.

Commitment with the process Commitment with the process reflects whether the participants were willing to devote their time and knowledge to the convergence task. It reflects whether the participants found it useful to perform the task. Commitment can be measured by using a questionnaire afterwards.

Productivity Productivity reflects whether the participants think that the result justifies the effort made, and is therefore defined as 'the balance between result quality and the resources spent' (Duivenvoorde et al., 2009). Productivity can be measured by means of a questionnaire after the convergence activity.

Efficiency Efficiency reflects whether the participants feel that they have spent more or fewer resources than planned on the convergence task. It is defined by in 't Veld (1987) as 'the difference in the net amount of resources used compared to the planned or expected amount of resources'. It can be measured by means of a questionnaire after the convergence activity.

The result oriented and process oriented dimensions and corresponding metrics listed in the sections above enable the assessment of the success and effectiveness of a convergence collaboration activity. Figure 5.1 gives a schematic overview of all dimensions. The next section presents a UML diagram to visualize how success and effectiveness can be measured. The model also visualizes to which artefacts or processes the dimensions belong and can be used in the future for the selection of a method for convergence.

5.2 Measuring the success and effectiveness of convergence Because the assessment of effectiveness requires the comparison of metric values before and after convergence, we present a UML model based on the input – process – output model to facilitate


this comparison. Figure 5.2 visualizes the model and the section below explains it. Comparing the attributes of the classes TaskDescription and Context with the attributes of the class ConvergedArtefact gives an assessment of effectiveness. The model can also be used for other purposes, which are listed below. Effectiveness can be seen as a continuum: it is the extent to which the result of a convergence activity contributes to the accomplishment of the goal set for the convergence process. In this thesis effectiveness is defined according to in 't Veld (1987) as 'the real result compared to the intended or expected result'. Using the dimensions summarized in Figure 5.1 (in no particular order), it is possible to assess the effectiveness of the convergence task afterwards. Questioning and/or observing the participants and coding the created artefact to calculate the result oriented metrics gives an indication of the real result of the convergence activity. To determine the effectiveness one should also capture the intended result in more detail. The sum of the differences between the intended and real values for all metrics determines the effectiveness of the convergence activity. Therefore the effectiveness of a convergence approach can be considered a function of all metrics described in this chapter. A weight multiplier can be added to each metric to express its relative importance for a particular workshop.
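The weighted-difference idea in the paragraph above can be sketched as follows; the metric names, intended values and weights used here are illustrative only:

```python
def effectiveness_gap(intended: dict, real: dict, weights: dict) -> float:
    """Weighted sum of absolute differences between intended and real
    metric values; 0.0 means the real result matches the intended
    result on every weighted metric."""
    return sum(w * abs(intended[m] - real[m]) for m, w in weights.items())

# Illustrative targets and outcomes, all expressed as fractions.
intended = {"reduction": 0.80, "redundancy": 0.05, "ambiguity": 0.05}
real     = {"reduction": 0.78, "redundancy": 0.10, "ambiguity": 0.03}
weights  = {"reduction": 1.0,  "redundancy": 1.0,  "ambiguity": 2.0}

gap = effectiveness_gap(intended, real, weights)  # smaller is better
```

The weights let a facilitator emphasize, for instance, ambiguity over redundancy for a workshop in which shared understanding is the main goal.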

Figure 5.1, dimensions to assess the performance of a convergence process

In the previous chapter an input – process – output model was introduced, tailored to convergence processes. This chapter has introduced a number of metrics to measure different relevant aspects of a convergence process. Using the elements of the input – process – output model together with the metrics allows for calculating the success and effectiveness of a method for convergence and facilitates the choice of a method for convergence. To visualize this, a UML class diagram for the elements of the input – process – output model is presented in Figure 5.2. The input of the model is represented as five different classes: BrainstormArtefact, TaskDescription, Context, Facilitator and Participants. Because the UML diagram is static, the method is represented by the class ConvergenceMethod, one of the methods from a database of methods, while the executed process itself is represented by the class ConvergenceProcess. The output is represented by the class


ConvergedArtefact. Below, the attributes of the classes and their relationships will be described. The purpose of the class diagram is to visualize which dimensions of success and effectiveness and the corresponding metrics are important, which can be compared, and to which element each corresponds.

Figure 5.2, UML class diagram for the input – process – output model

Input: BrainstormArtefact The input for a convergence activity is a set of concepts that the group has brainstormed in the same workshop in a preceding activity. This set of concepts, the brainstorm artefact, can be characterized by its size [RC], the level of redundancy [%], the level of refinement [%] and the level of shared understanding (ambiguity) [%].

Input: TaskDescription For every convergence activity a specific task can be specified, for example 'create a non-redundant list of concepts' or 'make sure all participants understand the concepts'. The characteristics of the task can be captured by describing the desired level of redundancy [%], the desired reduction level [%], the desired level of refinement [%], the desired level of comprehensiveness [%] and the desired level of shared understanding (ambiguity) [%].

Input: Context The context puts constraints on the convergence task and the corresponding methods that can be used. These constraints or conditions can be described by the time available [min] and the availability of a GSS [Y/N]. Convergence methods differ in speed and therefore in the time needed to finish the process.

Input: Facilitator The skill level of the facilitator [inexperienced – experienced] is a characteristic that helps in making a match between a method, the convergence task at hand and the facilitator. Every group performing a collaborative task is led or guided in some way by a chair or head of the meeting. In Collaboration Engineering this person is referred to as a facilitator or practitioner, depending on his or her skill level, experience and background. The skill level determines to a certain extent which method should be chosen for convergence. The methods for convergence are characterized by a


metric ease of use for the facilitator and by facilitator dependency. These two metrics are closely related to the metric satisfaction of the facilitator. Facilitator satisfaction is important for continuity.

Input: Participants The participants' background [string], the group size [# of participants] and the affinity of the group with technology [low/med/high] are considered important characteristics of the group of participants. Larger groups usually produce more concepts, resulting in a larger brainstorm artefact. A larger brainstorm artefact and a larger number of participants call for a more scalable method for convergence; a small team with a correspondingly small brainstorm artefact can use a less scalable method. Both the team and the facilitator are characterized by their level of affinity with technology such as GSS. Using a GSS with a team that is not familiar with computers leads to suboptimal results. A fit between the level of affinity with technology and the method and platform is a prerequisite for acceptance of, and satisfaction with, the method for convergence.

Process: ConvergenceMethod Most methods for convergence are listed in a database, two examples of which will be described in chapter 6. To facilitate the choice of a method for convergence, some characteristics need to be known for every method in the database. These include all process oriented and result oriented metrics described in this chapter. The values of the metrics are based on past experience and can be updated every time a method has been used.

Process: ConvergenceProcess The convergence process, executed according to the selected method for convergence is characterized by all process related metrics.

Output: ConvergedArtefact The output of a convergence process is a list of concepts, the converged artefact. All result oriented metrics described in this chapter can be used to characterize the converged artefact.

According to the definition by in 't Veld (1987), measuring effectiveness requires a comparison between the real result and the intended result. The intended result is described by all attributes of the class TaskDescription (redundancy, reduction, refinement, comprehensiveness and shared understanding) and by the following attributes of the class Context: time available and participant background. The real result is described by the same metrics from the classes ConvergenceProcess and ConvergedArtefact: acceptance, satisfaction (process), speed, redundancy, reduction, comprehensiveness, refinement and shared understanding. Besides effectiveness, success is described by the attributes commitment with the process, productivity and efficiency from the class ConvergenceProcess and the attributes satisfaction with the result and commitment with the result from the class ConvergedArtefact. The model can also be used to select an appropriate method for convergence: to select the best method, a match should be made between the attributes of the classes BrainstormArtefact, Facilitator, Participants, Context and TaskDescription and the attributes of the class ConvergenceMethod. Hereby it is assumed that a database exists that contains all methods for convergence with corresponding realistic values for all attributes of the class ConvergenceMethod.
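As an illustration of how such a comparison could work, the sketch below translates two of the Figure 5.2 classes into code. The attribute names paraphrase the text, and the matching rule in `meets_task` is an assumption for illustration, not the thesis's prescribed procedure:

```python
from dataclasses import dataclass

@dataclass
class TaskDescription:
    # Intended result, all expressed as fractions (0.0 - 1.0).
    desired_redundancy: float
    desired_reduction: float
    desired_refinement: float
    desired_comprehensiveness: float
    desired_ambiguity: float

@dataclass
class ConvergedArtefact:
    # Real result, coded after the convergence activity.
    redundancy: float
    reduction: float
    refinement: float
    comprehensiveness: float
    ambiguity: float

def meets_task(task: TaskDescription, result: ConvergedArtefact) -> bool:
    """Crude effectiveness check: did the converged artefact reach the
    levels asked for in the task description?"""
    return (result.redundancy <= task.desired_redundancy
            and result.reduction >= task.desired_reduction
            and result.refinement >= task.desired_refinement
            and result.comprehensiveness >= task.desired_comprehensiveness
            and result.ambiguity <= task.desired_ambiguity)
```

A ConvergenceMethod record in the envisaged database could carry the same metric fields, so that candidate methods can be filtered with an analogous comparison against the BrainstormArtefact, Context and TaskDescription attributes.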


6 Overview and analysis of current methods for convergence Approaches to execute a reduce and clarify pattern of collaboration are documented in various ways, three of which will be presented and analyzed in this chapter. We examine the pattern-language-based 'thinkLets' from the discipline of Collaboration Engineering (CE), the methods database of the International Association of Facilitators (IAF), and methods and approaches documented in relevant literature. This chapter answers the questions 'Which approaches to execute a reduce and clarify pattern of collaboration currently exist?' and 'What factors contribute to the effectiveness of executing a reduce and clarify pattern of collaboration in a GSS context?'. The first question is answered by giving an overview of all methods found in databases and literature; the second question is answered by analyzing and comparing the methods found. Analysis is made possible by using a classification scheme for methods for convergence. The scheme uses the way of working and the output of methods for convergence as axes to enable classification. After the classification of the methods found, we use an example collaboration process to show the opportunities that exist to improve the process of convergence.

In CE, reduce and clarify approaches are documented as thinkLets. ThinkLets, see for instance Kolfschoten (2007), are reusable building blocks for collaboration processes and can be grouped according to the pattern of collaboration they achieve. Each thinkLet describes the tool, a configuration and the script. Besides CE, other schools of facilitation and collaboration exist, for instance the Skilled Facilitator Approach (SFA) described by Adkins, Younger and Schwarz (2003), Appreciative Inquiry (AI) described by Cooperrider and Whitney (2009), and Social Constructionism and Inquiry-based Learning, both described on Wikipedia (2008; 2009a). Within these schools, best practices and methods are documented as well. To include these in this study, the methods database of the International Association of Facilitators (IAF) was searched. The IAF (2004) is an association with the objective to connect facilitators worldwide and to facilitate knowledge exchange between them. The IAF (2004) has approximately 1500 members in 63 countries.

Disciplines where collaboration is studied include: GSS research, human-computer interaction, organizational science, psychology, social psychology, design science, education, small group research and others. A literature search within these disciplines was conducted to reveal methods, best-practices, heuristics and case studies.

The remainder of this chapter is structured as follows: first the thinkLets found are introduced, then the IAF methods found, and finally the other methods and best practices found in literature. After the introduction of the methods, a classification and comparison is made. The chapter concludes with a description of opportunities for improvement, based on the comparison of the methods found and the convergence method needs in an example convergence step in a collaboration process.

6.1 ThinkLets
In this section all thinkLets that enable a reduce and/or clarify pattern of collaboration are introduced. The discipline of Collaboration Engineering (CE) has identified best practices for each pattern of collaboration. These best practices are called thinkLets (see Briggs et al., 2003). ThinkLets are reusable building blocks that can be used to structure, plan and communicate the design of a collaboration process. ThinkLets are described by a sequence of steps to be taken (the script), the
resources needed (tool), things to pay attention to (configuration) and an example. Figure 6.1 visualizes the thinkLet concept. For easy identification, thinkLets have catchy names, like LeafHopper or FastFocus. A first source was the book on thinkLets by Briggs and de Vreede (2009), which summarizes all thinkLets developed so far. Another library of thinkLets is formed by the book by Briggs and de Vreede (2003); these two books partially overlap. Davis et al. (2007) recently designed two new thinkLets for convergence, FastHarvest and FocusBuilder. ThinkLets for convergence were also found in journal publications and conference proceedings, including de Vreede, Fruhling and Chakrapani (2005) and de Vreede and Briggs (2005). A detailed overview of all thinkLets for convergence can be found in Appendix I, which also describes the in- and output and the elements of the thinkLets. A condensed version of this table is presented below in Table 6.1, which lists all thinkLets included in our analysis.

Figure 6.1, thinkLet concept (Briggs & de Vreede 2009)

In- and output of thinkLets
All thinkLets found assume as input a list of concepts on one or multiple topics, originating from an ideation activity, e.g. brainstorming. To further classify the thinkLets, a distinction can be made regarding the output artefact the thinkLets create. The characteristics of the output artefact that are considered are (1) whether the thinkLet aims to converge the brainstorm artefact to a list of selected concepts or to one statement, and (2) whether the thinkLet aims to remove redundancy and ambiguity from the brainstorm artefact or not. If it does, the convergence artefact is referred to as 'clean'. These characteristics are chosen because they enable a relevant classification.

Elements of the thinkLet in terms of activities
The convergence thinkLets can in most cases be broken down into a sequence of prescribed activities (a procedure). These activities range from a variety of selection methods to different discussion and presentation methods.

# ThinkLet name Pattern Description

(Pattern columns: Selecting, Abstracting, Summarizing, Clarifying — an X marks the patterns each thinkLet supports)

1 BroomWagon X

Brainstorming ideas are selected in order to identify the ones that are worthy of further attention (Davis et al., 2007). This is done by allowing the participants to place a limited number of checkmarks (den Hengst, van de Kar, & Appelman, 2004).

2 GarlicSqueezer X

The facilitator works with an assistant to condense the list of brainstorming ideas by selecting the contributions that represent the highlights. Each person starts at a different end of the list and works to the middle, so that all but the key ideas are squeezed out (Davis et al., 2007).

3 GoldMiner X Participants view a page containing a collection of ideas. They work in parallel, moving the ideas they deem most worthy of more attention from the original page to another page (Briggs & de Vreede, 2009).

4 Reporter X Allowing participants to explain concepts and soliciting input from other participants (Fruhling, Steinhauser, Hoff, & Dunbar, 2007).

5 ReviewReflect X X

The group reviews and comments on the existing content first, in a parallel way. Next, the group discusses the restructuring and rewording of the content in a moderated discussion (Davis et al., 2007; Noor, Grünbacher, & Hoyer, 2008).

6 ExpertChoice X X An expert condenses and summarizes a set of ideas and presents the finalized set to the entire group (Davis et al., 2007).

7 BucketSummary X X To remove redundancy and ambiguity from broad generated items (Nabunkenya, van Bommel & Proper 2008).

8 DimSum X X

Individual members generate candidate statements. Group members identify words and phrases that they like from those statements. The group and facilitator work together to draft a statement from the selected words and phrases. If wordsmithing breaks out, the process is repeated with the current draft as a starting point (Briggs, de Vreede, & Kolfschoten, 2008; Davis et al., 2007).

9 Pin the Tail on the Donkey X X

Group members browse a collection of ideas, often from a brainstorming session, and place a mark by the ideas on which they want to focus everyone's attention. Marked ideas are discussed in a plenary activity (Briggs et al., 2003; Davis et al., 2007).

10 BucketBriefing X X X Categories with ideas are assigned to subgroups and the subgroups clean up the ideas before reporting back to the entire group (Davis et al., 2007; Fruhling et al., 2007).

11 FastHarvest X X X

Participants form subgroups that are responsible for a particular aspect or category that relates to the brainstorm ideas. Taking a subset of all brainstorm ideas at a time, each subgroup extracts concise and clear versions of ideas that relate to their aspect or category. Every time the subgroup is done with a subset of ideas, they process another subset until they have considered all brainstorming ideas. When all subgroups are done, each subgroup presents their findings to the whole group and clarifies the meaning (not merit) of their extractions if necessary (Davis et al., 2007).

12 FastFocus X X X

Each participant browses a subset of brainstorming ideas. Participants take turns proposing an idea from the collection to be added to a public list of ideas deemed worthy of further consideration. The group discusses the meaning, but not the merits, of the proposed idea. The facilitator adds a concise, clear version of the idea to the public list (Davis et al., 2007).

13 FocusBuilder X X X

All brainstorm ideas are divided into as many subsets as there are participants. Each participant receives a subset of brainstorm ideas and is tasked to extract the critical ideas. Extracted ideas have to be formulated in a clear and concise manner. Participants are then paired and asked to share and combine their extracted ideas into a new list of concise, non-redundant ideas. If necessary, the formulation of ideas is improved, i.e.
the pairs focus on meaning, not merit. Next, pairs of participants work together to combine their two lists into a new list of concise, non-redundant ideas. Again, the formulation of ideas is improved if necessary. The pairing of lists continues until there are two subgroups that present their results to each other. If necessary, formulations are further improved. Finally, the two lists are combined into a single list of non-redundant ideas (Davis et al., 2007).

14 OneUp X X X

The group browses a collection of brainstorming ideas. The first participant adds an idea to the public list. For each subsequent addition, the proposer argues why the new idea is better than those already on the list. The facilitator writes a concise, clear version of the idea on the public list and also keeps a list of the criteria used by the participants (Davis et al., 2007; den Hengst et al., 2004).

Table 6.1, overview of thinkLets for convergence

6.1.1 Convergence thinkLet discussion
Davis et al. (2007) mention the RichRelations thinkLet in their overview of convergence thinkLets. According to the descriptions by Briggs, de Vreede, Nunamaker and Tobey (2001) and Briggs and de Vreede (2009), the RichRelations thinkLet aims to find relations between concepts. It is therefore better classified under the heading of the organize thinkLets. In finding the relations, the thinkLet could also lead to some reduction, but not necessarily.

When examining the elements and the output of the thinkLets presented, two axes of classification can be extracted: one regarding the way of working and one regarding the output the thinkLet creates. The output is either one final statement or a list of concepts; in case of a list, a distinction can be made as to whether ambiguous and redundant items are removed or not. The way of working during convergence is either completely plenary, or a combination of third party work or parallel work with plenary work. Using this classification it is possible to categorize the convergence thinkLets, as visualized in Figure 6.2. The way of working influences the speed, scalability and facilitator dependence of a method. The output can be matched with the goal and task description of the convergence activity.

Figure 6.2, classification of convergence thinkLets

Figure 6.2 only visualizes the physical dimension of the output, i.e. the way in which the output is represented. Another dimension of the output is whether shared understanding is created and whether the thinkLet leads to a reduction (by either selecting or summarizing concepts). To also visualize this, a third axis is needed in the diagram; in two dimensions this is visualized in Figure 6.3. It should be remarked that the relevancy of the nine categories for the clarify pattern of collaboration is rather limited, because the in- and output of a clarify pattern of collaboration will always be the same: the clarify pattern of collaboration results in a change of mind. Of course the wording of concepts can be changed, or comments can be added to concepts to capture the shared understanding, which changes the output. The column 'third party & plenary' in the figure for the reduce pattern also raises discussion: a third party can make a selection to reduce a list of concepts, but the moment the selection is presented, (some) shared understanding will also be created.

Figure 6.3, classification of convergence thinkLets, 3 dimensions

Convergence as a plenary activity
This category contains the Reporter, FastFocus and OneUp thinkLets.

FastFocus is widely used; see for instance the proposed collaboration processes for various purposes designed by Bragge, Merisalo-Rantanen et al. (2005), Bragge, Tuunanen, den Hengst and Virtanen (2005), Bragge et al. (2007) or Nabukenya et al. (2008). The thinkLet is very easy to implement, although some effort from the facilitator is required to arrive at the right formulation of a concept or set of concepts and to guide and extract meaning from discussion. A technical assistant (sometimes referred to as chauffeur) is also needed to create the new and clean list of ideas. FastFocus is relatively slow because the participants can only give suggestions for concepts for the new list turn by turn, and consensus on concepts can only be achieved by plenary discussion. The thinkLet is therefore not scalable and depends on the skills and experience of the facilitator and the assistant. The gain in speed from parallel working during divergence is lost in this thinkLet. The thinkLet does however satisfy the criteria of Media Synchronicity Theory (MST). OneUp also aims to create a new list, which is made by the facilitator or technical assistant. In addition, a list of criteria is created by the facilitator as participants add concepts to the new list. Every participant can propose a concept to be added to the new list; the facilitator is responsible for detecting redundancy and for the concise formulation of the concepts. The criteria of MST are satisfied, but this thinkLet requires much effort from the facilitator. Typical
for this thinkLet is that every concept added to the converged artefact should score better on (one of) the criteria than the previously added ones (den Hengst et al., 2004). This condition is to be guarded by the facilitator.

The FastFocus and OneUp thinkLets are highly similar; the main difference is OneUp's condition that each new addition to the list should score higher than the previous ones on (one of) the criteria. The output of both thinkLets is the same.

The Reporter thinkLet differs in this respect, because the produced list of concepts still contains redundant and ambiguous concepts. The list of concepts is also not reduced; the thinkLet therefore only fosters the creation of shared understanding. It could be applied to the output of all other convergence thinkLets as a modifier to foster the creation of shared understanding. The concept of the modifier is described by Kolfschoten and Santanen (2007): a distinction can be made between basic methods (thinkLets) and variations of these techniques (modifiers).

Convergence as a third party and plenary activity
The ExpertChoice thinkLet moves the convergence task to an expert. This is easy for the facilitator and participants, but requires a trusted expert, whose opinion might differ from the opinion of the group. Afterwards, a presentation is needed to introduce the participants to the converged artefact.

The GarlicSqueezer thinkLet is similar to the ExpertChoice thinkLet; the difference is that the convergence task is executed by the facilitator and an assistant instead of a domain expert. Here some parallel working is present, because the facilitator and assistant start working at the same time, but at different ends of the list. Somewhere in the middle they meet and finish.

For both thinkLets the third party (the expert, or the facilitator and assistant) removes the redundancy and ambiguity and checks for the right level of abstraction to convert the brainstormed artefact into the converged artefact. The creation of shared understanding is fostered in a presentation for the participants.

Convergence as a parallel and plenary activity
Most of the thinkLets found are classified into this category. This section is therefore divided into three parts, according to the output the thinkLets create: one single statement, a clean list or a list. With clean we mean that the list does not contain any redundant, similar or off-topic concepts.

Output: One statement
DimSum relies heavily on the skills of the facilitator, because the facilitator has to manage a group discussion and has to propose draft statements. DimSum aims at creating one statement from a list of brainstormed concepts. The DimSum thinkLet is also part of the set of thinkLets suited for e-collaboration, according to Briggs and colleagues (2008).

Output: Clean list
ReviewReflect only lets the group comment on the existing set of concepts; any reformulation and summarizing can only result from a later facilitated discussion. ReviewReflect is a good way to create shared understanding, because the group is first given the opportunity to create an overview of the concepts that they do not understand. This is exemplified by case studies described by Harder, Keeter, Woodcock, Ferguson and Wills (2005) and Noor, Grünbacher and Hoyer (2008).

The BucketBriefing thinkLet divides the concepts among the participants. Typically, subgroups of participants summarize a subset (bucket) of concepts, after which the subgroups present their findings to each other. For the participants this method is straightforward, since they only have to focus on a subset of the concepts. The workload for the facilitator in the first part of the execution of this thinkLet is low. This approach uses parallel working to speed up the convergence process; however, presenting the outcomes of the subgroups may be time consuming and can lead to long discussions, at which point guidance by the facilitator is again needed to structure the discussion. This thinkLet combines the benefits of parallel working and satisfies the criteria imposed by MST on convergence. A case study description by Fruhling et al. (2007) stresses the importance of giving the correct instructions to the participants: in the described case study the thinkLet did not work, because the participants started to select concepts based on instrumentality instead of removing redundancies and improving formulation. Other case studies report positively on this thinkLet, for instance Bragge et al. (2005) or Lowry, Dean, Roberts and Marakas (2009).

The FastHarvest and FocusBuilder thinkLets were designed by Davis et al. (2007) 'based on the strengths and weaknesses of the existing convergence thinkLets'. Both thinkLets aim to select, summarize and build shared understanding. Davis et al. (2007) report that both thinkLets have been field tested in six facilitated workshops. In these workshops both the facilitator and participants were satisfied with the results and accepted the way of working; FocusBuilder led to a higher reduction rate than FastHarvest. Based on the case studies, Davis et al. (2007) identified strengths and weaknesses for each of the two thinkLets. The opportunities for improvement of both thinkLets are the facilitator's inability to guide, follow or steer the process until the plenary part starts, and the problems that can arise when participants do not understand concepts or are not willing to contribute to the workshop. Both thinkLets combine the speed of parallel working with an activity where shared understanding is created in a plenary discussion or presentation. A literature review revealed only two case studies in which one of the two thinkLets was applied. Tarmizi et al. (2006) describe the successful application of the FocusBuilder thinkLet with a geographically distributed team; weaknesses or improvement points specific to FocusBuilder are not mentioned. Tarmizi et al. (2006) do conclude that 'the newness of collaboration tools or process objects can hamper participants from using those tools correctly' and that 'some level of training for team leaders and/or members might be needed to make them familiar with collaboration tools in terms of process objects'. Davis et al. (2008) report a second case study where the FocusBuilder thinkLet was used, by a number of groups of undergraduate students. The data of the workshops that were conducted to evaluate FocusBuilder are partially copied in Table 6.2.

Workshop  Brainstormed concepts  Unique & critical concepts in brainstorm  FocusBuilder concepts  Unique & critical concepts in FocusBuilder
1  53  24  5  5
2  11  8  7  5
3  41  19  4  3
4  29  11  3  1
5  19  13  5  5
6  20  11  5  5
7  29  15  8  5
8  34  22  5  5
Table 6.2, FocusBuilder workshop data from Davis et al. (2008)

Using the data in Table 6.2 it is possible to calculate the level of reduction and level of comprehensiveness. The data also allows calculating the optimal level of reduction. This is the level
of reduction that would lead to a comprehensiveness ratio of 1, assuming that the right concepts are selected. A comprehensiveness ratio of 1 indicates that all unique and critical concepts from the brainstorm artefact are included in the convergence artefact.

Workshop  Reduction [%]  Comprehensiveness [%]  Desired reduction [%]
1  91%  21%  55%
2  36%  63%  27%
3  90%  16%  54%
4  90%  9%  62%
5  74%  38%  32%
6  75%  45%  45%
7  72%  33%  48%
8  85%  23%  35%
Table 6.3, reduction and comprehensiveness levels for Table 6.2
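The calculations behind Table 6.3 can be sketched as follows. This is an illustrative reconstruction: the formulas are inferred from the relation between Tables 6.2 and 6.3, and the variable names are our own.

```python
# Reduction, comprehensiveness and desired reduction from the Table 6.2 data.
# Formulas (inferred, not stated explicitly in the source):
#   reduction         = 1 - (converged concepts / brainstormed concepts)
#   comprehensiveness = unique & critical retained / unique & critical in brainstorm
#   desired reduction = 1 - (unique & critical in brainstorm / brainstormed concepts)

# (workshop, brainstormed, unique & critical in brainstorm,
#  FocusBuilder concepts, unique & critical in FocusBuilder)
workshops = [
    (1, 53, 24, 5, 5),
    (2, 11, 8, 7, 5),
    (3, 41, 19, 4, 3),
    (4, 29, 11, 3, 1),
    (5, 19, 13, 5, 5),
    (6, 20, 11, 5, 5),
    (7, 29, 15, 8, 5),
    (8, 34, 22, 5, 5),
]

for w, brainstormed, uc_brainstorm, converged, uc_converged in workshops:
    reduction = 1 - converged / brainstormed
    comprehensiveness = uc_converged / uc_brainstorm
    desired_reduction = 1 - uc_brainstorm / brainstormed
    print(f"Workshop {w}: reduction {reduction:.0%}, "
          f"comprehensiveness {comprehensiveness:.0%}, "
          f"desired reduction {desired_reduction:.0%}")
```

For workshop 1, for instance, this yields a reduction of 1 - 5/53 ≈ 91% and a comprehensiveness of 5/24 ≈ 21%, matching the first row of Table 6.3.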

The levels of reduction are similar to those achieved with other convergence thinkLets, like FastFocus. In these 8 workshops the levels of reduction are too high, meaning that good concepts are left behind. The levels of comprehensiveness are relatively low, indicating an important weakness of the thinkLet: critical concepts can be left behind if they are not included by a participant in the first round. In this first round it is nearly impossible for the facilitator to monitor or guide the process; the first round can therefore be seen as the reason for the low comprehensiveness ratios. Removing this first round and directly starting to work in pairs would probably improve the outcomes with respect to comprehensiveness; the thinkLet then has more similarities with the BucketBriefing thinkLet. Pairing teams with the objective of combining results is, however, a very clever element of this thinkLet, because the benefits of parallel working are used.
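FocusBuilder's pairing structure can be illustrated as a simple reduction tree. This sketch is not part of the thinkLet documentation: `merge()` is a hypothetical stand-in for the human activity of combining two lists into one concise, non-redundant list, modeled here as order-preserving deduplication.

```python
# Illustrative model of FocusBuilder's pairing rounds: individual lists are
# merged pairwise, round by round, until a single list remains. With n
# participants the number of rounds grows as ceil(log2(n)), which is why
# the pairing element scales well.

def merge(a, b):
    """Stand-in for the human combining step: deduplicate, keep order."""
    seen, combined = set(), []
    for idea in a + b:
        if idea not in seen:
            seen.add(idea)
            combined.append(idea)
    return combined

def focusbuilder_rounds(lists):
    rounds = 0
    while len(lists) > 1:
        # pair adjacent lists; an odd list out moves to the next round as-is
        lists = [merge(lists[i], lists[i + 1]) if i + 1 < len(lists) else lists[i]
                 for i in range(0, len(lists), 2)]
        rounds += 1
    return lists[0], rounds

final, rounds = focusbuilder_rounds([["a", "b"], ["b", "c"], ["c", "d"], ["d", "e"]])
# with four participant lists, two pairing rounds suffice
```

The model also makes the comprehensiveness weakness visible: any concept dropped before the first merge can never reappear in a later round.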

Output: List
Pin the Tail on the Donkey also requires the facilitator to manage a group discussion and does not eliminate redundancy, because the participants are only allowed to select a number of concepts that they find interesting or promising. Pin the Tail on the Donkey can be used to let a group indicate for which concepts more information or background is needed, as exemplified by Appelman and van Driel (2005). In this way the thinkLet can serve as an enabler for creating shared understanding, by indicating where shared understanding is lacking. Harder and Highley (2004) used this thinkLet to let the facilitator (interviewer in their case) request more information from the participants (interviewees) in a situation assessment workshop in the US military. The BroomWagon and GoldMiner thinkLets are also very facilitator dependent; see den Hengst, van de Kar and Appelman (2004) for a case study in which the BroomWagon thinkLet is used. The parallel part of these two thinkLets results in a reduction of the brainstorm artefact toward those concepts that the participants want to give further attention. As the classification of these thinkLets implies, redundancy and ambiguity are not removed; the creation of shared understanding is fostered in the plenary discussion or presentation part.

6.1.2 Summary of findings from thinkLet review
The review of thinkLets that support and enable the convergence pattern of collaboration reveals that multiple thinkLets exist to support and enable both reduce and clarify patterns of collaboration, and combinations of the two in various ways. Furthermore, overlap and similarities between the various thinkLets were found. For instance, FastFocus and OneUp are similar; they only differ in the condition that OneUp places on new additions to the final list. Sometimes FastFocus is executed in pairs to increase speed; the process is then very similar to BucketBriefing and FastHarvest. Also
GarlicSqueezer and ExpertChoice are similar; ExpertChoice could also be executed with more than one expert working on the brainstorm artefact at the same time. We also found similarities between the BroomWagon, GoldMiner and Pin the Tail on the Donkey thinkLets. There are only slight differences in the way the concepts are selected or marked (using checkmarks or physically moving the concepts). All three thinkLets start with a parallel selection activity that could be executed in pairs, and all three conclude with a guided discussion. The core of these three thinkLets is cherry picking: asking the participants to select the most instrumental concepts out of the brainstorm artefact. The sum of these selections is then input for a moderated discussion in which the concepts can be discussed, rephrased, combined or summarized. Finally, FastHarvest is similar to BucketBriefing when every subgroup considers every subset of concepts.

All thinkLets that support and enable the clarify pattern of collaboration do this by means of a facilitated plenary discussion. Case studies show that the thinkLets that use a parallel way of working save time and depend less on the skills of the facilitator. Parallelism in a thinkLet (or convergence method in general) also leads to scalability, in terms of both the number of participants and the number of concepts to be dealt with. When working in parallel, another distinction can be made regarding the type of tasks and content the participants are working on: either all participants execute the same task on the same content in parallel, or all participants execute the same task on different parts of the content in parallel, as for instance in the FastHarvest or FocusBuilder thinkLets.
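The two classification axes discussed in this section can be captured in a small lookup structure. This is a sketch: the groupings follow the prose of section 6.1.1 (not Figure 6.2 directly), and the function name is illustrative. "Clean list" means redundant and ambiguous concepts are removed.

```python
# Classification of the convergence thinkLets on the two axes from this
# section: way of working and output artefact. Groupings taken from the
# discussion in section 6.1.1.
THINKLET_CLASSIFICATION = {
    # name: (way of working, output artefact)
    "Reporter":       ("plenary", "list"),
    "FastFocus":      ("plenary", "clean list"),
    "OneUp":          ("plenary", "clean list"),
    "ExpertChoice":   ("third party & plenary", "clean list"),
    "GarlicSqueezer": ("third party & plenary", "clean list"),
    "DimSum":         ("parallel & plenary", "one statement"),
    "ReviewReflect":  ("parallel & plenary", "clean list"),
    "BucketBriefing": ("parallel & plenary", "clean list"),
    "FastHarvest":    ("parallel & plenary", "clean list"),
    "FocusBuilder":   ("parallel & plenary", "clean list"),
    "Pin the Tail on the Donkey": ("parallel & plenary", "list"),
    "BroomWagon":     ("parallel & plenary", "list"),
    "GoldMiner":      ("parallel & plenary", "list"),
}

def thinklets_with(way=None, output=None):
    """Filter thinkLets on either classification axis."""
    return [name for name, (w, o) in THINKLET_CLASSIFICATION.items()
            if (way is None or w == way) and (output is None or o == output)]
```

For a designer, such a table supports the matching step described above: given the goal of a convergence activity (desired output) and the constraints on speed and facilitator skill (way of working), the candidate thinkLets can be looked up directly.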

6.2 International Association of Facilitators (IAF) methods database
To conclude the search, the methods database of the International Association of Facilitators (IAF) was examined. In this public database, facilitation methods and techniques are stored and shared among the IAF community of facilitators, as described by Jenkins (2008). All methods in the database are documented in the same way, using the structure described and designed by Jenkins, Bootsman and Coerts (2004). The database can be searched online, using a search engine on the IAF web site. The initial search revealed 212 methods that matched the initial search criteria 'convergence, reduce & clarify and consolidation'. These criteria were deliberately set very broad so as not to exclude any method in the first step. The entire list was copied to MS Excel and reduced to 32 methods, based on the short descriptions of the methods. These 32 methods were inspected in more detail to further narrow down the selection: methods that were similar to other methods were excluded, as were methods that do not enable a reduce or clarify pattern of collaboration. After excluding these, 19 methods remain, which are presented in Table 6.4. The table lists the name of each method, a short description, and the pattern of collaboration. A more detailed overview can be found in Appendix J.

# Name Pattern Description

(Pattern columns: Selecting, Abstracting, Summarizing, Clarifying — an X marks the patterns each method supports)

1 2x2 Value Matrix X Evaluating and selecting a concept from a list, based on its score on 2 criteria.

2 3x3 Value Matrix X Evaluating and selecting a concept from a list, based on its score on 3 criteria.

3 Affinity Diagram X X Collaborative clustering of concepts and finding a general heading for each cluster.

4 Ballooning method X X All concepts are written on balloons. By bursting balloons a selection is made.

5 Build up X X X After generating a list of alternatives, someone is asked to name one that might work. The facilitator asks: 'Is there anyone who can NOT live with this one?' If there is anyone, ask for changes that could help everyone live with it.

6 Clustering in columns method X X All ideas are clustered into columns, after which the columns are given titles. Each column has a symbol; when clustering is finished, the symbols are replaced by names.

7 Symbol gestalt method X X Similar to the 'clustering in columns' method, only now the symbols are added to the concepts instead of vice versa.

8 Delphi method X X Anonymous experts summarize and remove redundancy from a list of brainstormed concepts.

9 Discerning priorities with colours X Sticking coloured dots to concepts. Every dot colour represents a different criterion. The participants are given a limited number of dots per colour.

10 Evaluation by values X X Collaboratively assigning values (based on predefined criteria) to a set of concepts. Based on the evaluation, concepts are selected.

11 Gallery tour or walk X X X Concepts are brainstormed in subgroups; each group presents its concepts on a separate screen or whiteboard. The groups visit each other and the concepts are presented.

12 Multi voting X Same as 'Discerning priorities with colours'.

13 Nominal Group Technique X Divergence method where clarification of concepts is stimulated by the group in the form of questions.

14 Paired Comparisons X Making pairwise comparisons between a limited set of concepts to select the most suitable one.

15 Pin Cards X X Non-anonymous way of brainstorming concepts, followed by sorting the concepts into columns to group them and remove redundancy.

16 Plus, Minus, Interesting X X Describing and eventually selecting a concept from a large list by collaboratively describing the plus, minus and interesting points per concept.

17 Polar Gestalt method X X X The concepts are brainstormed on Post-its and placed on a wall to show relations and similarities. In the end, all concepts that are grouped together are given one name.

18 The Hundred Dollar Test X Participants allocate 100 dollars among the concepts.

19 TRIZ X X X Large set of principles to guide a group. License needed.

Table 6.4, overview of IAF methods for convergence

6.2.1 Discussion of IAF methods found
The first two methods found in the IAF database use a value matrix with two or three dimensions to rank and select concepts from a list. Although this is a very plausible way of selecting, the methods do not mention a way to remove redundancy or to clarify concepts. Method #4 is essentially also a selection method; it adds to creativity, but in practice will be difficult to facilitate. Methods #9 and #12 are also selecting methods; they use multiple colours to represent multiple criteria. In no way the redundancy and unclearness of the output of a GSS-supported
divergence activity is taken into account in these methods. The selecting and clarifying method #10 does take this into account, because the values are assigned to the concepts collaboratively. This, however, is time consuming and makes the success of the method dependent on the skills of the facilitator. Method #18 is also a selection method.
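The mark-based selection methods discussed here (#9, #12, #18) all reduce a list by the same arithmetic: each participant distributes a fixed budget of marks (dots, votes or dollars) over the concepts, and the concepts with the highest totals are kept. A minimal sketch, with illustrative names and budgets of our own:

```python
# Sketch of mark-based selection (dot voting / hundred dollar test style):
# tally each participant's marks per concept and keep the top-scoring ones.
from collections import Counter

def tally_selection(votes_per_participant, keep_top):
    """votes_per_participant: list of dicts mapping concept -> marks spent."""
    totals = Counter()
    for votes in votes_per_participant:
        totals.update(votes)
    return [concept for concept, _ in totals.most_common(keep_top)]

votes = [
    {"idea A": 2, "idea B": 1},  # participant 1, hypothetical budget of 3 marks
    {"idea B": 2, "idea C": 1},  # participant 2
    {"idea A": 2, "idea C": 1},  # participant 3
]
shortlist = tally_selection(votes, keep_top=2)
```

Note that the sketch makes the limitation of these methods concrete: two redundant formulations of the same idea are tallied as separate concepts, splitting their marks, which is exactly the redundancy problem the methods leave unaddressed.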

The Affinity Diagram (method #3) uses subgroups of participants to group subsets of brainstormed concepts. The concepts are written on Post-its to allow regrouping. In a second round, a summarizing heading is created for every group of concepts. This method uses parallel working in the first round to speed up the process; in the second round, shared understanding is created by collaboratively defining the heading for each group of concepts. The method documentation does not describe how to deal with redundancies. The method also depends on the skills of the facilitator to guide and focus the discussion.

The Build up method (#5) has similarities with the FastFocus thinkLet; the method however focuses on formulating one final statement from the list of concepts. Its success depends on the knowledge and skills of the facilitator to deal with large amounts of information and to guide group discussion. The method does not use parallelism to increase speed.

The clustering in columns method (#6) uses columns to let the participants cluster their brainstormed concepts. For this purpose, the concepts are written on Post-its. Each participant is asked in turn to place one or two of their most concrete concepts in one of the predefined columns on a wall. After this, all participants together try to place their remaining concepts in the same columns. The method depends on the skills of the facilitator in the 'naming the column' phase and during the discussion phase. Redundancy is dealt with by placing the Post-its concerned on top of each other. The idea of asking the participants for their best and most concrete concepts is similar to the FastFocus thinkLet and might be a good start for creating clusters (columns). Method #7, Symbol Gestalt, is similar to Clustering in Columns (#6), as are the Pin Cards method (#15) and the Polar Gestalt method (#17).

The Delphi method (#8) uses independent experts to evaluate, select and clarify a set of concepts. This is similar to the ExpertChoice thinkLet.

Method #11, Gallery Tour, has similarities with the FocusBuilder thinkLet. The difference is that the concepts are also brainstormed in subgroups. This could be convenient when it is possible to divide the participants according to expertise or to divide the topic of the workshop into several subtopics. The method uses parallelism and does not depend on the skills of the facilitator.

The Nominal Group Technique (NGT, #13) lets participants brainstorm individually. All individually brainstormed concepts are combined in a plenary activity, led and guided by a facilitator. In this round the facilitator encourages the creation of shared understanding; no attention is yet paid to the instrumentality of concepts, only questions regarding meaning are allowed. In the next round, each participant can assign points to each concept. The sum of the points leads to a classification of the concepts. The way the concepts are created differs from what is common in a GSS context. Putting the concepts together has similarities with the FastFocus thinkLet. The method does not use parallelism in the convergence phase.

Supporting convergence in groups

Overview and analysis of current methods for convergence

47

Pairwise comparison of concepts (#14) is only possible when the number of concepts is low and when the concepts do not overlap and are clear to all participants. Since this is not the case in a GSS context, this method does not serve the purpose of this thesis.

Method #16 (Plus, Minus, Interesting) has similarities with the CrowBar and OneUp thinkLets (Briggs & de Vreede, 2009). The facilitator workload is high, because the facilitator has to compare all the concepts based on the input of the group. The method is also time consuming, because the participants are asked, in a plenary way, to come up with three different aspects per concept.

Unfortunately it was not possible to get insight into the TRIZ toolkit (#19).

Classification

Using the classification scheme defined in the previous section, it is possible to classify all IAF methods found. This is visualized in Figure 6.4.

Figure 6.4, IAF convergence methods classification

6.2.2 Summary of findings from IAF methods database review

The IAF methods database also contains multiple methods that support a reduce and/or clarify pattern of collaboration. Five reduction methods were found that all use voting as a selection method: in parallel, all participants vote on all concepts using one or more criteria. The voting methods and number of criteria used differ, but all five methods make a pre-selection of the concepts under consideration. The IAF plenary reduce & clarify methods use a plenary discussion to create shared understanding; either a plenary discussion or a voting method is used to make a selection of concepts. Two methods were found that use a parallel way of working to reduce a list of concepts and to create shared understanding.

6.3 Comparing thinkLets and IAF methods

The thinkLets and other methods found differ too much with regard to input and output to enable direct comparison across the board. It is, however, possible to compare the IAF methods and thinkLets that support the same output and have the same way of working. To visualize the comparison, Figure 6.3 and Figure 6.4 need to be placed next to each other; this is done in Figure 6.5. The matrix for the clarify pattern of collaboration is not displayed, because no IAF method was found that only supports that pattern of collaboration.

Figure 6.5, comparing IAF methods and thinkLets

The comparison clearly shows similarities between the thinkLets and the IAF methods. The IAF database has methods for reducing & clarifying towards a list in a plenary way; the thinkLet database does not. Neither database contains methods to support the reduce & clarify pattern of collaboration in a third party & plenary way. This is logical, because clarification cannot be achieved by a third party alone. Both databases do contain methods to support the reduce pattern of collaboration in a third party & plenary way. A difference is that the IAF database does not contain methods to arrive at one single statement. For the reduce pattern of collaboration there are some slight differences in selection methods, but the basic idea of first making a selection of the most promising concepts is present in both thinkLets and IAF methods. The IAF methods introduce the use of more than one criterion and the use of a scale to express instrumentality when making a selection. This is also true for using a third party to make a selection and to deliver a cleaned-up version of the brainstorm artefact.

An element found in both method databases is the use of voting to make a pre-selection of concepts; the voting methods and number of criteria used differ. All methods that foster the creation of shared understanding do this by means of a plenary (facilitated) discussion. Both databases contain methods that use parallel groups to make summaries and selections of sections of the brainstorm artefact.

6.4 Other methods for convergence

In the search for articles describing facilitation techniques, for both GSS-supported and traditional face-to-face meetings, many case studies were encountered. Below the results are summarized,
with a focus on methods, approaches and techniques to support or enable a reduce and/or clarify pattern of collaboration. The purpose of the literature review was to extract approaches and methods for convergence used in meetings and workshops other than thinkLets or IAF methods. The approaches found were diverse and applicable in a variety of fields, such as education (convergence of knowledge), engineering (convergence of designs) and software design. The criteria for selecting an article were that the approach could be used in a GSS context, focused on the problem of reducing and clarifying a set of concepts, and was not a thinkLet or IAF method. Appendix B gives a complete overview and summary of all examined articles. Articles that did not report any method or theory on the topic of convergence are left out of the appendix.

Findings: methods and tools for convergence

All studies that described a case study or experiment acknowledged the difficulty of converging on a list of concepts. These include: Catledge and Potts (1996), Chen et al. (2007), den Hengst and Adkins (2005), Heninger et al. (2006), Herrmann (2009), Nunamaker Jr. (1997), Shen et al. (2004) and Slater and Anderson (1994). Some studies only mention the problem; others describe solution directions. Table 6.5 gives an overview of the solution directions found.

Method | Description | Reference
Tagging | Using tags to classify generated ideas and to enable content analysis. | Vivacqua et al. (2008)
Using the facilitator | The success of the convergence activity depends on the experience and knowledge of the facilitator. | Shen et al. (2004), Samaran (2007), Dean et al. (2000), Chen et al. (2007)
Graphical representation, allowing multiple views | The GSS should provide multiple views on information (high level – detailed level); context information should also be available. Users should be able to switch between views independently. | Nunamaker Jr. (1997)
Dialogue mapping | Making a graphical representation of the dialogue. | Montibeller et al. (2006), Herrmann et al. (2009)
Causal mapping | Making a graphical representation of the dialogue, with a focus on its causal structure. | Deshpande et al. (2005)
Search function for the facilitator | The facilitator is given a search function to search and find concepts in the list. | Chen et al. (2007)
Participant-Driven GSS | (1) evaluate input, (2) correct input, (3) combine redundancies, (4) cluster ideas into threads, (5) name and rename threads, (6) summarize threads. | Helquist et al. (2007)

Table 6.5, convergence methods found in literature

In the next sections, all methods mentioned in Table 6.5 will be elaborated upon.

Tagging

Vivacqua and colleagues describe the use of tagging during brainstorming. A tag is, according to Wikipedia (2009b), 'a non-hierarchical keyword or term assigned to a piece of information (such as an internet bookmark, digital image, or computer file). This kind of metadata helps describe an item and allows it to be found again by browsing or searching'. An example of the use of tags is found in the publication of scientific articles: every author is requested to provide a number of keywords (tags) to classify the paper. The keywords are used for easy retrieval and storage of publications. The 2007 version of the Microsoft Office Suite also uses tags to categorize and identify files. Every time a new
file (document, presentation, spreadsheet etc.) is saved, the user can specify keywords. In this way the users create a library of keywords that enables the classification and retrieval of files.

Vivacqua and her team created a version of the FreeBrainstorm thinkLet in which every concept added by the participants also has a tag attached. The participants can create their own tag or use a tag from the library (created by other participants, in the same or even in previous workshops). The concepts added are also categorized with a tag into one of the following four categories: (1) new idea, (2) derived idea, (3) pro and (4) con, as described by Vivacqua et al. (2008). According to Vivacqua et al. (2008) the tags can be used for content analysis and can provide a basis for the classification of the concepts generated.

Vivacqua et al. (2008) propose a number of measures to give the facilitator insight into group performance. The measures can be grouped into time-based and content-based measures. The time-based measures include: Group Participation Rate, Distribution of Contributions, Individual Participation Rate, Idea Flow and Attention Allocation. The content-based measures include: Idea Discussion, Global Tag Growth Rate, Interpersonal Agreement or Disagreement, Individual Positioning, Divergence Level and Number of Derived Ideas. Vivacqua et al. suggest that for convergence tasks the following measures could be of use: Idea Flow (time elapsed between ideas), Idea Discussion (responses generated by each idea), Idea Distinction (an indicator of the difference between ideas), Interpersonal Agreement or Disagreement (pros and cons submitted by one user in relation to another) and Divergence Level (number of different ideas being generated). No experimental data from this research is available yet. Vivacqua et al. (2008) argue why the mentioned measures can be of importance for a facilitator who guides a convergent pattern of collaboration with a group.

Idea Flow and Idea Discussion can only be measured during the divergence activity and provide guidance for that activity; these two measures can provide the facilitator with background information for the convergence activity. The Divergence Level could be a good indicator of the expected difficulty of the following convergence task. To monitor the convergence process itself, new measures have to be developed. The use of tags to enable measurement could be helpful; however, making tags distracts the participants from the main task, brainstorming. This imposes a barrier to creativity and is therefore not desirable.
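Several of these measures are straightforward to compute from logged contributions. The sketch below derives Idea Flow and Divergence Level from a list of tagged contributions; the record layout and field names are illustrative assumptions for this example, not part of Vivacqua et al.'s system.

```python
def idea_flow(timestamps):
    """Mean time elapsed between consecutive contributions (in seconds)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps) if gaps else 0.0

def divergence_level(contributions):
    """Number of distinct tags in use: a rough proxy for how many
    different ideas are being generated."""
    return len({c["tag"] for c in contributions})

# Hypothetical log of tagged brainstorm contributions.
contributions = [
    {"time": 0,  "tag": "cost",    "category": "new idea"},
    {"time": 30, "tag": "cost",    "category": "pro"},
    {"time": 90, "tag": "quality", "category": "new idea"},
]
print(idea_flow([c["time"] for c in contributions]))  # 45.0
print(divergence_level(contributions))                # 2
```

A real GSS would feed these functions from its contribution log; the point is only that tagged input makes such measures cheap to obtain.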

Using the facilitator

As already noted, many methods rely heavily on the skills of the facilitator for convergence. The tasks of proposing a selection, finding redundant concepts, and summarizing and abstracting concepts are in these cases given to the facilitator. When the facilitator is capable of doing all this, these methods work well. Facilitation in general, however, is difficult and requires experience and practice.

Graphical representations

Nunamaker Jr. (1997) wrote an article about the needs for further research in the field of GSSs, in which he gave his vision on the process of convergence and on dealing with large amounts of data and information in a GSS context. He expresses the need for textual and graphical views of information. This would help participants in the process of sense making, because they can switch between the macro level and a specific level of detail. Besides switching between the macro level and a detailed level, participants should also be able to request more meta-information on the concepts in the system, such as time and order, content and subject. Nunamaker Jr. (1997) mentions that 'categorization and classification of generated information are only part of the story of sense making'. This is true, but Nunamaker does not mention the step of reducing the set of information. Nunamaker (1997) also argues that 'search capabilities need to be extended beyond key word searches to include semantic, temporal and conceptual searches that enable individuals to quickly analyze and
make sense out of a huge amount of information'. The vision of Nunamaker links with the field of Computer Supported Argument Visualization (CSAV), which, according to Kirschner, Buckingham Shum and Carr (2003), is primarily used for the solution of ill-structured problems.

Dialogue mapping & Causal mapping

Dialogue mapping is a facilitated approach 'for modelling the dialogue a group has when discussing complex decisions. It seeks to encourage dialogue to support collective enquiry into a problem situation and develop shared understanding among participants and shared commitment to an appropriate way forward' (Montibeller, et al., 2006). The elements of a dialogue map are visualized in Figure 6.6. In his book on dialogue mapping, J. Conklin describes that the facilitator is supposed to map the dialogue of the participants, using the notation and symbols of Figure 6.6. To do this, the facilitator can use electronic tools, such as Compendium (Compendium Institute, 2009); various tools to support dialogue mapping are available (Conklin, 2006; Herrmann, 2009; Montibeller, et al., 2006). Dialogue mapping is useful for visualizing lines of argumentation on a certain topic. It also requires a different form of divergence: the participants do not enter text into a system directly; instead, the facilitator listens to the discussion and from time to time proposes text to enter into the dialogue map. The resulting map is presented to the participants in real time, using a projector or screen.

Figure 6.6, Dialogue map elements, adapted from Picture it Solved (2006)
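Figure 6.6 itself is not reproduced here, but dialogue maps of this kind typically follow IBIS-style notation, with questions, ideas and pro/con arguments linked into a tree. The sketch below models such a map as a small data structure; the node kinds are an assumption based on common IBIS practice, not a transcription of the figure.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str            # "question", "idea", "pro" or "con" (assumed IBIS kinds)
    text: str
    children: list = field(default_factory=list)

    def add(self, kind, text):
        """Attach a child node and return it, so maps can be built fluently."""
        child = Node(kind, text)
        self.children.append(child)
        return child

# The facilitator maps the discussion as it unfolds:
root = Node("question", "How do we shorten our meetings?")
idea = root.add("idea", "Split the group into parallel subgroups")
idea.add("pro", "Work proceeds in parallel")
idea.add("con", "The facilitator cannot monitor every subgroup")
print(len(idea.children))  # 2
```

Tools such as Compendium maintain and render a structure of this general shape in real time for the group.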

Dialogue mapping has some similarities with causal mapping. Both dialogue mapping and causal mapping ask the participants of a workshop three basic questions: what should we do? Why should we do it? And how should we do it? Differences lie in the time frame over which a model is used: a dialogue map can be used across several meetings, whereas a causal map is used for one meeting only. This is mainly because a causal map aims to create a snapshot, an overview of thinking at one moment in time. Causal mapping supports multiple levels, which allows a large map to be created. Both techniques require some training in the set of symbols and modelling rules. Both methods rely heavily on the skills of the facilitator to operate a system and tool, to structure discussion and to manage resources. Herrmann and colleagues have used dialogue mapping in a series of experiments and came up with heuristics for the design of workshops, based on interviews and observations. Not surprisingly, Herrmann et al. acknowledge the problem of dealing with large amounts of information after a brainstorming (divergent) activity. In their paper a number of
heuristics was listed for the design of the convergence process (Herrmann, 2009). According to Herrmann et al., the particular difficulty is that 'reducing the set of ideas by prioritizing them has to take place without excluding valuable contributions or possibilities of merging them' (Herrmann, 2009). The solution is continuous and evolutionary documentation that accompanies the convergence process. This could be done in the following ways:

• Using the content of the information and the structure of collaboration to identify correlations, clusters and threads (Herrmann, 2009).

• Clustering and documentation of relationships should happen at the same time.

• Managing and monitoring the attention of the participants: neglected or conflicting items should be brought to the participants' attention.

• Convergence should be traceable, by using collaborative documentation (Herrmann, 2009).

• Dialogue mapping should be used to document decisions regarding prioritizing (Herrmann, 2009).

Herrmann et al. suggest the use of mind maps (a form of dialogue mapping) to support the processes of grouping and clustering information.

Causal mapping is similar to dialogue mapping; however, there is an emphasis on the causal structure underlying the arguments.

Participant-driven GSS

Helquist et al. (2007) note that 'However, as mentioned previously, this converging is a difficult problem and groups often resort to facilitators to expedite the process. The facilitator represents a considerable bottleneck in the process as the work is for practical purposes conducted serially, instead of in parallel'. To solve this problem of facilitator dependence, Helquist and colleagues (2007) developed a concept named participant-driven GSS (PD-GSS): 'PD-GSS seeks to leverage the skills and abilities of each group member to reduce the burden of, or dependence on, the facilitator' (Helquist et al., 2007). To enable convergence in a participant-driven way, the process is split up into modules to reduce cognitive load and complexity. Each module is developed in such a way that it is independent and that the tasks and objectives are clear to the participants. This also enables participants to work on the same task from different locations and at different times, which was one of the objectives of Helquist's research. Figure 6.7 visualizes the described process elements and briefly explains the meaning of each of the concepts. To date, no experimental data from workshops with the PD-GSS is available (Helquist, et al., 2007). The modules are clear and participants should be able to execute these steps without any problems. It would be interesting to know how much time it would take a distributed and asynchronous group to complete such a process, compared to a group working at the same time in the same location.
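The six PD-GSS modules can be sketched as a pipeline. The module names below follow Helquist et al. (2007), but the function bodies are illustrative placeholders only, not the published implementation.

```python
def evaluate(cs):      # 1. evaluate input: keep sufficiently rated concepts
    return [c for c in cs if c["rating"] >= 3]

def correct(cs):       # 2. correct input: tidy wording
    return [{**c, "text": c["text"].strip().capitalize()} for c in cs]

def combine(cs):       # 3. combine redundancies: drop exact duplicates
    seen, out = set(), []
    for c in cs:
        if c["text"].lower() not in seen:
            seen.add(c["text"].lower())
            out.append(c)
    return out

def cluster(cs):       # 4. cluster ideas into threads (here: by a tag)
    threads = {}
    for c in cs:
        threads.setdefault(c["tag"], []).append(c["text"])
    return threads

def name_threads(ts):  # 5. name and rename threads (here: keep tag as name)
    return ts

def summarize(ts):     # 6. summarize threads: one line per thread
    return {name: f"{name}: {len(items)} concept(s)" for name, items in ts.items()}

raw = [{"text": " cut costs ", "rating": 4, "tag": "finance"},
       {"text": "Cut costs",   "rating": 5, "tag": "finance"},
       {"text": "off topic",   "rating": 1, "tag": "misc"}]
result = summarize(name_threads(cluster(combine(correct(evaluate(raw))))))
print(result)  # {'finance': 'finance: 1 concept(s)'}
```

The modular split is the point: each step is independent, so different participants can execute different steps at different times and places.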

Figure 6.7, Participant-driven GSS convergence process elements, (Helquist et al. 2007)

6.4.1 Summary of findings from literature review: convergence methods

To increase readability, the results from this part of the literature review are briefly summarized here.

• The use of tagging during brainstorming enables the measurement of performance-related metrics. No results on the impact of tagging itself on group creativity and performance are available yet. Tagging distracts participants from the creative task at hand.

• Causal mapping and dialogue mapping are two facilitated approaches that deliver graphical representations of a group's collaboration activity. Both approaches centre on three questions: what should we do? Why should we do it? And how should we do it? Neither approach uses parallel input of concepts or provides anonymity to the participants, and both depend largely on the facilitator for the creation of the graphical representations. The creation of a causal or dialogue map structures the discussion around a set of questions. This reduces noise and distraction, but also limits creativity.

• The Participant-Driven GSS (PD-GSS) approach focuses on enabling GSS-supported collaboration for distributed and asynchronous teams. PD-GSS splits the convergence process into six simple steps. Benefits of this process are the good ability of the facilitator to monitor the process and the structure provided for the participants. This approach has some similarities with the FocusBuilder thinkLet. No results from the application of PD-GSS with real groups are available yet.

6.5 Common elements from the thinkLet database, the IAF method database and literature review

From the reviews of the thinkLet database, the IAF method database and the literature review, factors can be identified that influence the effectiveness and/or success of the execution of a convergence pattern of collaboration. This summary answers the third sub-question of this thesis: 'what factors contribute to the success and effectiveness of executing a reduce and clarify pattern of collaboration in a GSS context?'.

Analysis of the elements of which the methods are composed revealed that all methods use a plenary way of working, a combination of a plenary and parallel way of working, or a third party to execute the convergence task. A parallel way of working increases speed, is scalable and lowers the dependence on a facilitator; on the other hand, it poses a risk to the comprehensiveness of the converged artefact. A plenary way of working is relatively slow, not scalable and demands much effort from the facilitator.

Many of the methods use voting for making a (pre-)selection of concepts. The voting process itself takes place in parallel, since all participants vote on each concept simultaneously and the voting results are calculated automatically. A voting process is less dependent on the skills of the facilitator, but is neither fast nor scalable when a large number of concepts is under consideration.
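Mechanically, such a voting-based pre-selection reduces to tallying scores and keeping the top-ranked share of concepts, as in this sketch (the keep fraction and the scoring scale are arbitrary choices for illustration):

```python
def preselect(scores, keep_fraction=0.3):
    """Rank concepts by total score and keep the top fraction.

    scores: {concept: [one score per participant]}
    """
    totals = {c: sum(v) for c, v in scores.items()}
    ranked = sorted(totals, key=totals.get, reverse=True)
    k = max(1, round(len(ranked) * keep_fraction))
    return ranked[:k]

# Three participants score each concept on a 1-5 scale.
votes = {"idea A": [5, 4, 5], "idea B": [2, 1, 3], "idea C": [4, 4, 2]}
print(preselect(votes))  # ['idea A']
```

The tally itself is trivially automatable; the scalability problem the text describes lies in every participant having to read and score every concept, which this computation does nothing to reduce.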

All methods found that foster the creation of shared understanding do this by means of a plenary presentation and discussion of the concepts under consideration. Facilitating a discussion requires more effort from a facilitator than hosting presentations.

For dealing with redundant concepts, a number of solutions were found. In the PD-GSS approach, participants are asked to identify redundant items and other participants are asked to verify them. Chen et al. (2007) argue for a text-based search function that lets the facilitator quickly search for themes or subjects within a set of concepts. Most thinkLets and IAF methods rely on the skills of the facilitator and the group to identify and remove redundancies, which is a demanding task for a facilitator.

6.6 Opportunities for improving convergence

Based on the findings described above and the inspection of methods for convergence, opportunities can be identified to complete the methods database. We define and describe the opportunities in this section, but first illustrate them using an example. This is a first step in answering the question 'How can convergence processes become more successful and effective?', because opportunities for improvement are described.

As an example to present the opportunities for improvement, a creative problem solving workshop is chosen. As is shown in Figure 3.3 on page 12, a creative problem solving process consists of three general steps: (1) problem preparation, (2) idea generation and (3) solution evaluation. Between steps two and three the need for convergence arises. This need follows from several factors. Parallel to what Dennis, Wixom and Vandenberg (2001) describe for general policy making workshops, the team has, after the problem formulation phase, brainstormed for different solutions that have the potential to solve the problem. Brainstorming has been supported by a GSS. The convergence method(s) should comply with the characteristics of a creative problem solving
collaboration process. Further, it is assumed that the team that wants to execute the collaboration process is willing to use a GSS as the platform to support it. Chapter 6 revealed a large number of different methods to enable convergence. The methods vary in input, output, way of working, supported patterns of collaboration and technology dependency. Therefore all methods found score differently on the metrics discussed in chapter 5. The next sections elaborate on the characteristics with which the convergence method needs to comply.

Brainstorm artefact

One element of the input for the convergence task is the set of brainstormed creative solutions. This set of solutions can be characterized using the attributes of the BrainstormArtefact class presented in chapter 5. For this example it is assumed that redundancy is present, the level of refinement is below 100% and shared understanding does not yet exist. The size of the brainstorm artefact is assumed to depend on the unknown number of participants in the workshop.

Task description

The goal of convergence in the collaborative problem solving workshop is to enable evaluation of the brainstormed set of possible solutions. This goal can be characterized using the attributes of the TaskDescription class presented in chapter 5.

TaskDescription attribute (metric) | Value
Level of redundancy | 0%
Level of reduction | < 100%, comprehensiveness = 100%
Level of refinement | Actionable solutions (SMART), 100%
Level of comprehensiveness | 100%
Level of shared understanding | Maximize

Table 6.6, goal for convergence in creative problem solving

Table 6.6 lists the metrics associated with the TaskDescription class, together with their values for the convergence task within the creative problem solving workshop. Ideally we want all redundancies to be removed from the brainstorm artefact and want to achieve the greatest level of reduction possible under the constraint that the comprehensiveness is 100%. Further, all solutions should be formulated in an actionable way and every participant should understand the meaning of all concepts in the converged artefact.
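The reduction and comprehensiveness targets in Table 6.6 could, for instance, be computed as simple ratios. The formulas below are an illustration consistent with the descriptions above, not definitions taken from chapter 5.

```python
def level_of_reduction(n_before, n_after):
    """Share of concepts remaining after convergence; below 1.0 (100%) is the goal."""
    return n_after / n_before

def comprehensiveness(key_ideas, converged):
    """Share of the key ideas that survived into the converged artefact."""
    return len(key_ideas & converged) / len(key_ideas)

# 120 brainstormed concepts reduced to 18, with all 3 key ideas retained:
print(level_of_reduction(120, 18))                               # 0.15
print(comprehensiveness({"a", "b", "c"}, {"a", "b", "c", "d"}))  # 1.0
```

Under these definitions the goal row of Table 6.6 reads: drive `level_of_reduction` as low as possible while keeping `comprehensiveness` at 1.0.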

Context, Participants and Facilitator

The attributes of the Context, Participants and Facilitator classes presented in chapter 5 have the following values in this example. It is the ambition of the team and facilitator to minimize the time spent on convergence; a GSS is available; all participants are part of an interdisciplinary team, have collaborated before, and no large hierarchical differences are present. For the group size three different scenarios are considered: a small group (<8 participants), a medium-sized group (8-20 participants) and a large group (>20 participants). All participants are familiar with the use of computers and GSS software. Regarding the facilitator skill level, two different scenarios are considered for this example: an experienced facilitator and a relatively inexperienced practitioner with domain knowledge.

Selecting a method for the creative problem solving workshop

Based on the characteristics of the convergence task for a creative problem solving workshop described above, it is possible to select suitable methods from chapter 6. The selection is made,
based on experiences with the use of the methods described in literature and the classification made in chapter 6.

From the descriptions of the convergence thinkLets and IAF methods, the candidate methods for convergence in a creative problem solving workshop are selected and listed in Table 6.7.

Group size | Facilitator skill: high | Facilitator skill: low
< 8 | FastFocus | FastFocus, BucketBriefing, FastHarvest
8 – 20 | FastFocus, FocusBuilder, BucketBriefing, FastHarvest | FocusBuilder, BucketBriefing, FastHarvest
> 20 | FocusBuilder, BucketBriefing, FastHarvest; or a combination of a selection thinkLet (BroomWagon, GoldMiner or Pin the Tail on the Donkey) with one of these | FocusBuilder, BucketBriefing, FastHarvest; or a combination of a selection thinkLet (BroomWagon, GoldMiner or Pin the Tail on the Donkey) with one of these

Table 6.7, candidate thinkLets for convergence within a creative problem solving workshop
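Table 6.7 can be read as a simple lookup from group size and facilitator skill to candidate thinkLets. The function below is a transcription for illustration only; the boundary handling at exactly 8 and 20 participants is an assumption based on the table's row labels.

```python
def candidate_thinklets(group_size, experienced_facilitator):
    """Candidate convergence thinkLets per Table 6.7 (illustrative lookup)."""
    parallel = ["FocusBuilder", "BucketBriefing", "FastHarvest"]
    if group_size < 8:
        return ["FastFocus"] if experienced_facilitator else \
               ["FastFocus", "BucketBriefing", "FastHarvest"]
    if group_size <= 20:
        return ["FastFocus"] + parallel if experienced_facilitator else parallel
    # > 20 participants, either skill level: the parallel thinkLets, optionally
    # preceded by a selection thinkLet (BroomWagon, GoldMiner or
    # Pin the Tail on the Donkey) as a pre-selection step.
    return parallel

print(candidate_thinklets(5, True))    # ['FastFocus']
print(candidate_thinklets(30, False))  # ['FocusBuilder', 'BucketBriefing', 'FastHarvest']
```

In practice a facilitator would of course weigh further context attributes from chapter 5; the lookup only captures the two dimensions the table varies.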

The FastFocus thinkLet has proven to lead to a reduction of the number of concepts, a reduction of redundancy, a high level of comprehensiveness and an increase in shared understanding, as is shown in research by Badura et al. (2009). The plenary way of working implied by this thinkLet leads to a high workload for the facilitator because of the discussion guidance he has to provide. During the discussion the facilitator also needs to keep track of redundancies and needs to propose alternative wordings of concepts. This makes the thinkLet less suitable for inexperienced facilitators. Furthermore, the participants can only contribute to the discussion turn by turn, which leads to a longer execution time as the number of participants increases.

The BucketBriefing, FocusBuilder and FastHarvest thinkLets have a high degree of similarity. All three thinkLets have an element of parallel working. For convergence, working in parallel means that the convergence task is divided into parts, by dividing the brainstorm artefact into parts. The participants are also divided into subgroups, and each part of the brainstorm artefact is assigned to a subgroup. The subgroups are asked to remove redundancy in their subset and to summarize and combine concepts where possible. Helquist et al. (2007) have shown the benefits of this way of converging: a lower workload for the facilitator and a gain in speed. This makes these three thinkLets more suitable for larger groups and less experienced facilitators. Fruhling et al. (2007) warn of the inability of the facilitator to monitor the quality of the work that the participants are executing, combined with the difficulty of giving the right instructions. Davis et al. (2007) also mention the inability to monitor as a limitation of these thinkLets. The three thinkLets differ in the way the subsets of concepts are created, how many times they are used, and in the way in which and when the subgroups are formed. All three thinkLets use a plenary presentation / discussion to foster the creation of shared understanding.

Opportunities for improvement

This example shows that enough methods currently exist to support a convergence task in a GSS environment. However, support for medium and large groups and support for less experienced facilitators can be improved. The support should focus on reducing the facilitator dependence, detecting redundant concepts and increasing the overall scalability of a method to support large groups.

To lower facilitator dependence and to increase scalability, parallel working as described by the FocusBuilder thinkLet led to positive results in previous case studies (Davis, et al., 2007). Problems
with the current design of this method are the risk of losing critical concepts in the first round and the inability of the facilitator to monitor the process.

The removal of redundant concepts leads to a reduction and increases shared understanding, since discussion is needed. To reduce facilitator dependence and workload, the detection task should not be given to the facilitator. Options are to give it to the participants, as described in the PD-GSS approach (Helquist, et al., 2007), or to automate it, as suggested by Chen et al. (2007).
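A minimal sketch of what such automated detection could look like is a word-overlap (Jaccard) similarity with a fixed threshold. This illustrates the direction only; it is not the technique designed later in this thesis, and the threshold is an arbitrary choice.

```python
def jaccard(a, b):
    """Word-overlap similarity between two concept texts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def redundant_pairs(concepts, threshold=0.5):
    """Flag concept pairs whose word overlap exceeds the threshold."""
    return [(a, b) for i, a in enumerate(concepts)
                   for b in concepts[i + 1:]
                   if jaccard(a, b) >= threshold]

ideas = ["reduce travel costs", "reduce the travel costs", "hire more staff"]
print(redundant_pairs(ideas))  # [('reduce travel costs', 'reduce the travel costs')]
```

Flagged pairs would still be presented to the group for confirmation, keeping the human judgment while taking the scanning burden off the facilitator.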

To increase scalability for large groups it is beneficial to first make a quick pre-selection before starting a more detailed reduction and clarification step; an example of this is described by Briggs et al. (2003). Within the thinkLet and IAF databases, methods are present to support this type of pre-selection. However, when every participant has to consider every item in the set, whether by voting, placing checkmarks or moving items, these methods are time consuming and therefore not scalable.

Therefore, opportunities to make useful contributions to the method database concentrate in at least four directions:

1. Moving the task of detecting redundant concepts away from the facilitator to lower his/her workload.
2. Overcoming the current hurdles that exist when converging in a parallel way, as suggested by the FocusBuilder thinkLet among others. The current hurdles include:
   a. securing comprehensiveness;
   b. improving the ability of the facilitator to monitor the process.
3. Creating a scalable and fast pre-selection method.
4. Better support for facilitators with a low skill level to manage a convergence process in a large group.


7 Design: support for convergence

In the previous chapter four opportunities for improving support for the convergence pattern of collaboration were identified. This chapter describes the efforts to design artefacts that benefit from these opportunities and therefore adds to the answer to the question 'How can convergence processes become more successful and effective?'. Because of the nature of the identified opportunities, three artefacts are designed. To address the first opportunity (moving the task of detecting redundant concepts away from the facilitator to lower his/her workload) we present the design of a technique to automatically detect redundant concepts in a brainstorm artefact. To address the second opportunity (overcoming the current hurdles that exist when converging in a parallel way, as suggested by the FocusBuilder thinkLet among others) we present a change in the design of the FocusBuilder thinkLet. This can be regarded as a modifier to the thinkLet (Kolfschoten & Santanen, 2007). To address the third opportunity (creating a scalable and fast pre-selection method) we present the design of a new method that enables a group to make a pre-selection of concepts in a fast and scalable way. The method is designed as a new thinkLet. We have chosen a thinkLet because thinkLets are reusable and are already used by a large body of facilitators and practitioners; an addition to the thinkLet database can therefore be adopted directly. Further, we believe that all three artefacts address the fourth opportunity. The chapter is divided into three main parts. The first part focuses on a technique for the detection of redundant concepts and includes three iterations of a design. The second part describes the design of a pre-selection thinkLet and the third part proposes changes to the existing FocusBuilder thinkLet.

7.1 Detection of redundant concepts

Based on the findings of the previous chapter it is assumed that the automatic detection of redundant concepts will decrease cognitive load, decrease facilitator dependence and increase speed. Herrmann (2009) also describes these potential benefits of automatic detection of redundant concepts. Possibly, the level of refinement and the level of shared understanding can also be influenced when a GSS 'knows' which concepts are similar. To be able to detect redundant concepts, a technique is needed. This section introduces a technique for the detection of similarities in texts and the changes that were needed to use the technique on a brainstorm artefact. The section concludes with a description of the accuracy of the technique and of situations where and how it can be used.

Previous research has focussed on automatic classification of GSS output, see for instance Orwig et al. (1997) and Chen et al. (1994). In these studies similar concepts were identified and clustered simultaneously. The goal here is to design a technique that is able to detect redundant concepts only; the main reason for this is to lower complexity. Different techniques exist to detect similarities and redundancies on a textual basis. In software engineering, latent semantic analysis (LSA) is very popular, see for instance Dessus (2009). LSA makes use of an occurrence matrix to match pieces of text and offers possibilities to give particular words a weight. Another technique to detect redundant concepts is normalized word vectors (NWV), see Dreher and Williams (2006). Current applications of this technique are the detection of plagiarism in texts, see Dreher (2007), the automatic grading of essays, see Parker, Williams, Nitse and Tay (2008), and the classification of texts, see Williams and Dreher (2004). The NWV technique is capable of dealing with unstructured texts, which makes it suitable for brainstorm artefacts. NWVs are vectors in a large dimensional space that can be used to represent the content of a document, according to Parker et al. (2008). The basis for NWV is a thesaurus; the vector is created by comparing the words of a document with the thesaurus. The coordinates of the NWV are then calculated by counting the number of times each word in the document occurs in the thesaurus. Comparison of documents then occurs by calculating the angle between the vectors (Parker et al., 2008). An example of NWVs can be found in Appendix L.
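The construction and comparison of NWVs can be sketched as follows. The miniature thesaurus, the example concepts and the function names here are illustrative assumptions for the sake of the example; a real application would use a full thesaurus, such as the 130,000-entry Dutch one used later in this chapter.

```python
import math
from collections import Counter

# Illustrative miniature "thesaurus": maps words to a canonical head word.
# This toy mapping is an assumption for the example only.
THESAURUS = {
    "car": "vehicle", "auto": "vehicle", "vehicle": "vehicle",
    "fast": "quick", "quick": "quick", "rapid": "quick",
    "road": "road", "street": "road",
}
HEADWORDS = sorted(set(THESAURUS.values()))

def nwv(concept: str) -> list[float]:
    """Build a word vector by counting thesaurus head-word hits."""
    counts = Counter(THESAURUS.get(w) for w in concept.lower().split()
                     if w in THESAURUS)
    return [counts[h] for h in HEADWORDS]

def angle_deg(u: list[float], v: list[float]) -> float:
    """Angle between two concept vectors in degrees (90 = no overlap)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0 or nv == 0:
        return 90.0  # no thesaurus words found: treat as unrelated
    return math.degrees(math.acos(min(1.0, dot / (nu * nv))))

a = nwv("a fast car on the road")
b = nwv("a quick auto on the street")
c = nwv("nothing related here at all")
print(angle_deg(a, b))  # small angle: likely redundant concepts
print(angle_deg(a, c))  # 90.0: no link
```

Note that, as the text observes, such a technique only considers single words: two concepts that use the same thesaurus head words in different senses will still produce a small angle.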

7.1.1 Using Normalized Word Vectors for the detection of similarities in a brainstorm artefact

Within plagiarism detection or automatic grading, NWVs are applied at document level. To detect similarities within a brainstorm artefact, all individual concepts need to be compared; therefore an NWV needs to be calculated for every concept separately. To assess the feasibility of this technique at sentence level, instead of document level, a number of evaluations were executed.

For the first test a brainstorm artefact containing 20 concepts was selected. First the redundant concepts were identified manually and the identified redundancies were discussed with a second coder. Within the brainstorm artefact 15 pairs of similar concepts were identified. Next, NWVs were constructed by building a small thesaurus in MS Excel and comparing all words in the brainstorm artefact with the words in the thesaurus; this resulted in a set of twenty 152-dimensional vectors. Matlab was used to calculate the matrix of angles between the vectors. By comparing the manually constructed matrix with the NWV angle matrix, the accuracy of the technique could be assessed. In three design iterations 11 out of the 15 redundant concept pairs were detected; 4 concept pairs were falsely recognized as redundant. More details on the results found and the changes made in the design iterations are described in Appendix M. Based on this result we concluded that it is possible to use NWVs as a technique to detect redundant concepts in a brainstorm artefact. Attention should be paid to the exclusion of common words when comparing the words of the concepts with the thesaurus; this conclusion is also found in the research by Chen et al. (1994). The false positives found were due to the presence of words that can be interpreted in more than one way. Because NWV only takes single words into consideration, this will always be a limitation. Large differences in the length of sentences also negatively influence the angles.

To use NWV within groups, the matrix containing the similarity angles should be available quickly. For the three design iterations described above, on average 6 man hours per iteration were needed to create the thesaurus, assemble the vectors and calculate the angles between the vectors. This calls for automation of the creation of the vectors and the calculation of the angles between them. In cooperation with the owner of the Dutch thesaurus website synoniemen.net (2006), digital access was gained to a Dutch thesaurus containing 130,000 entries. From the 'Corpus Gesproken Nederlands' (CGN) a frequency-ordered list of 2,000 common words was obtained. CGN is a Dutch-Flemish research project in which, among others, spoken language is coded and words are extracted to count frequencies (Nederlandse Taalunie, 2004). The original CGN list was reduced to 848 entries for use as a common word filter; the reduction was necessary since not all words on the list were considered meaningless. An overview of all entries can be found on the CD-ROM on the last page of this thesis. Software to remove the common words from the brainstorm artefact, compare the remaining words with the thesaurus and create the vectors was developed by TeamSupport. To calculate the angles between the vectors the numerical computing environment MATLAB was used. This led to a considerable reduction in calculation time; on average 10 minutes were needed to produce the similarity matrix for a 60-concept brainstorm artefact.


To assess the performance, three original brainstorm artefacts were used; the artefacts contained 87, 56 and 63 entries. First the three artefacts were inspected manually to identify redundant items. A coder examined all possible relations between the concepts in a matrix and classified each possible relationship as no link (0), similar (1) or equal (2). The results were checked by and discussed with a second coder. Next, the similarity matrix for each of the three artefacts was automatically calculated, resulting in three matrices (87×87, 56×56 and 63×63) containing the calculated angles. Based on the experience from the evaluation described above, all angles between 0° and 70° were considered to indicate that the two corresponding concepts are equal, angles between 70° and 80° to indicate similarity, and angles between 80° and 90° to indicate no link between the corresponding concepts. Based on this classification the NWV result can be compared with the manual result by comparing nine combinations, mapped into seven categories. These categories are represented in Table 7.1.

Manual result   NWV result    Category
0 (no link)     0 (no link)   A
0 (no link)     1 (similar)   B
0 (no link)     2 (equal)     C
1 (similar)     0 (no link)   D
1 (similar)     1 (similar)   E
1 (similar)     2 (equal)     E
2 (equal)       0 (no link)   F
2 (equal)       1 (similar)   G
2 (equal)       2 (equal)     G

Table 7.1, mapping differences between manual coding and NWV into categories
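The angle thresholds and the category mapping of Table 7.1 can be sketched as follows; the function names are illustrative, while the 70°/80° boundaries and the category letters are the ones used in this evaluation.

```python
def code_from_angle(angle: float) -> int:
    """Map an NWV angle to the coding scheme: 2 equal, 1 similar, 0 no link."""
    if angle < 70:
        return 2   # 0-70 degrees: concepts considered equal
    if angle < 80:
        return 1   # 70-80 degrees: concepts considered similar
    return 0       # 80-90 degrees: no link

# Category letters from Table 7.1: (manual code, NWV code) -> category.
CATEGORIES = {
    (0, 0): "A", (0, 1): "B", (0, 2): "C",
    (1, 0): "D", (1, 1): "E", (1, 2): "E",
    (2, 0): "F", (2, 1): "G", (2, 2): "G",
}

def categorize(manual: int, angle: float) -> str:
    """Compare one manually coded relation with the NWV result."""
    return CATEGORIES[(manual, code_from_angle(angle))]

print(categorize(2, 65))   # manual: equal, NWV: equal -> "G"
print(categorize(1, 85))   # manual: similar, NWV missed it -> "D"
```

Counting how often each category occurs over all concept pairs of an artefact yields the tallies reported in Table 7.2.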

7.1.2 Accuracy of detecting redundant concepts

Comparing the manual coding effort with the automatically calculated angles revealed that NWV scores high in detecting 'equal' relations, represented by '2' in the coding scheme. For the three artefacts used, these scores are 83%, 75% and 100%. For detecting the similar relations, represented by '1' in the coding scheme, the scores are far lower: using NWV it is possible to detect roughly half of these similarity relations (42%, 49% and 48% respectively). Overall, the percentages of matches regarding the detection of similarities (similar and equal) are 45%, 51% and 53% respectively. NWV scores high in the correct detection of no relationship between concepts: 97%, 96% and 87%. Besides this, using NWV saves considerable resources compared to manual detection of similarities. The results of the comparison are visualized in Table 7.2.

Artefacts / Categories   Green-ICT [# links]   Energy transition [# links]   Shared space [# links]

A   3533   1408   1678   Correct (no link)
B    103     47    189   False positive
C     15     16     27   False positive
D     47     33     27   Similarity not detected
E     34     32     25   Correct (similarity detected)
F      1      1      0   Equalness not detected
G      5      3      5   Correct (equal detected)

                      Green-ICT [%]   Energy transition [%]   Shared space [%]
No link (% correct)   96.8            95.7                    88.6             % of correctly detected no-links vs. all no-links
Link (% correct)      44.8            50.7                    52.6             % of correctly detected links vs. all links

Table 7.2, comparison of automatically versus manually detected similarities


7.2 Design of a pre-selection thinkLet

To better support large groups at the beginning of a convergence task, a fast and scalable pre-selection method is needed. This section addresses opportunities three and four as identified in the previous chapter. A pre-selection can be made to quickly make a first selection, on which further, more in-depth, reduction and clarification can build. Therefore the most important aspects of a pre-selection method are speed, scalability and the removal of all off-topic and nonsensical concepts. To speed up the process, not every participant rates every concept. It should be adjustable how many participants rate each concept, but in general 4 is expected to be sufficient, based on experiences from Faieta et al. (2006). Evaluation with groups should show whether this really is the case. The idea is to let the GSS decide, based on the average voting score and its standard deviation, whether a concept is to be included in or excluded from the pre-selection, or whether more votes are needed. When more (or fewer) than 4 votes per concept are collected, the threshold values for the average and standard deviation change accordingly. An analysis as described in Appendix N can be used to determine the threshold values for the mean and standard deviation. The selection is made using a 5-point scale, ranging from non-critical (not important) to critical (important). A 5-point scale is used because of the level of detail it offers; this detail can be used in other activities (e.g. creating subsets of concepts or structuring a discussion). Secondly, because fewer votes than the number of participants are collected per concept, the participants have more time to express their vote. To further save time in presenting the results of the voting process, the system automatically divides the brainstorm artefact into four categories:

1. High usefulness (critical) and low standard deviation. The group agrees that these concepts are useful to solve the problem at hand.

2. High usefulness (critical) and high standard deviation. Most participants think that these concepts are instrumental to the group goal. However, the high standard deviation indicates that some participants do not share this opinion. Discussion or more votes are needed to reveal the exact meaning of the concepts.

3. Low usefulness (non-critical) and high standard deviation. Most participants think that these concepts are not instrumental to the group goal. However, the high standard deviation indicates that some participants do not share this opinion. Discussion or more votes are needed to reveal the exact meaning of the concepts.

4. Low usefulness (non-critical) and low standard deviation. The group agrees that these concepts are not useful to solve the problem at hand.

The group can disregard the concepts that ended up in the fourth category (low average and low standard deviation). The concepts in categories 2 and 3 can be subjected to a second vote by more participants (for which a second round of voting is needed), or can be used by the facilitator as input for a short plenary discussion. The concepts of the first category should definitely be included in the further convergence steps. The method is outlined in general in Figure 7.1.


Figure 7.1, speeding up the voting process, Divide&Conquer thinkLet

In the next paragraphs the idea will be presented further. Aspects that will be explained are (1) using the average voting score and its standard deviation to automatically select concepts to continue working with and (2) speeding up the selection by not letting all participants rate every concept; instead, only subgroups of participants rate subsets of the concepts. As an example it is assumed here that every concept is rated by only 4 different participants. The only constraint is that a participant does not rate his or her own concept(s), to limit strategic behaviour.

7.2.1 Scenario

To further introduce the idea, this paragraph sketches a scenario of the use of this selection technique.

A group of more than 8 people has brainstormed on a topic (so far there seems to be no constraint on the kind of topic for the brainstorm). The number of brainstormed concepts is at least four times (preferably six or more times) the number of participants that have contributed to the brainstorm. To achieve the goal of the workshop, it is necessary to reduce the number of concepts under consideration. In other words, the group wants to continue with a selection of the concepts. The group only wants to select those concepts that they find promising or critical to achieve their goal or solve their problem. The group also wants to quickly make a first (rough) selection of the concepts, to save time for a second round of more detailed analysis of the value of the brainstormed concepts, or because a rough selection is the end deliverable of their workshop. To enable this, the facilitator and the GSS offer the opportunity to rate each concept on a simple five-point scale. To save even more time in the selection procedure, initially each concept is rated by only four participants, instead of by the entire group. Based on the average rating score and its standard deviation, the GSS decides whether or not to keep the concept in the system. The facilitator sets the minimum and maximum values for the average and standard deviation of the concepts that are to be included. The rating occurs in two rounds, of which the second is not always necessary. In the first round, every concept is rated by four participants. The GSS then decides which concepts are to be kept in the system and which can be disregarded. For concepts whose rating is indecisive (high standard deviation), a second round of voting is needed. In the second round, all participants (except the ones that have already rated the concept) rate the selected concepts. It is assumed that after this second round the GSS is able to classify all concepts, and is therefore able to select the concepts that the group deems worthy of further attention. The GSS presents this list of concepts to the facilitator, who can then decide how to proceed with it. In the event that the second round of voting was also indecisive for some concepts, the GSS informs the facilitator about this. The facilitator can then decide what to do; his options are (1) to include them, (2) to not include them, or (3) to decide based on a group discussion.

7.2.2 Details of the method

In this section the details of the method regarding the division of concepts and the selection rules for the standard deviation and average value are elaborated upon.

Dividing the concepts for voting

To support this method, concepts need to be divided among the participants. It is expected that this approach will only work in homogeneous groups where power and knowledge are distributed evenly. Also, to limit strategic behaviour, participants should never be asked to vote on a concept that they added to the brainstorm themselves. Since it is not possible to estimate the number of concepts in the brainstorm artefact in advance, it will not always be possible for every participant to rate the same number of concepts. Suppose a group of 10 participants has brainstormed 56 concepts and the facilitator wants every concept rated 4 times; in total 224 votes are then needed. This can be realised when 4 people rate 23 concepts and the other 6 rate 22 concepts. The constraint that a participant is not allowed to rate his own concept can also lead to a situation where the number of votes to give is not equal between the participants. The number of desired votes per concept cannot be larger than the number of participants, otherwise participants would be asked to rate the same concept more than once, which of course makes no sense. 0 gives an example of how a GSS could code the different concepts in order to make sure that the desired number of votes per concept is given, no participant votes on his or her 'own' concepts, and participants are not asked to give e.g. the first and second vote for the same concept.
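The division described above can be sketched as a simple greedy assignment; the function, the data layout and the least-loaded-first strategy are illustrative assumptions, not the actual GSS implementation.

```python
def assign_raters(concepts, participants, votes_per_concept=4):
    """Assign `votes_per_concept` raters to every concept such that nobody
    rates a concept they authored and the load is spread evenly.
    `concepts` is a list of (concept_text, author) pairs; the number of
    participants must exceed votes_per_concept."""
    load = {p: 0 for p in participants}
    assignment = {}
    for text, author in concepts:
        # Eligible raters, least-loaded first, excluding the author.
        eligible = sorted((p for p in participants if p != author),
                          key=lambda p: load[p])
        raters = eligible[:votes_per_concept]
        for p in raters:
            load[p] += 1
        assignment[text] = raters
    return assignment

# 10 participants, 56 concepts, 4 votes each -> 224 votes in total.
participants = [f"P{i}" for i in range(10)]
concepts = [(f"concept {i}", participants[i % 10]) for i in range(56)]
a = assign_raters(concepts, participants)

loads = {}
for raters in a.values():
    for p in raters:
        loads[p] = loads.get(p, 0) + 1
print(sorted(loads.values()))  # loads stay within a vote or two of each other
```

With 224 votes over 10 participants, the greedy strategy reproduces the uneven-but-fair split discussed above (some participants rate 22 concepts, others 23).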

5-point scale and 4 votes per concept

The proposed scale is a 5-point Likert scale. Reasons for this are that the scale should have a middle value, to accommodate the votes of people who feel that they cannot assess the usefulness of a concept, and that the scale should not be too complicated, because this would slow down the voting process. Therefore a 5-point scale is chosen: a middle point and, symmetrically, 2 values each to indicate usefulness and no usefulness. A 5-point scale does allow participants to add detail to their choice. The scale is visualized in Figure 7.2. A discrete 5-point scale allows for 625 different combinations of scores when the system collects 4 votes per concept. Of these 625 possible combinations, 70 are unique; see Appendix N for a detailed overview. For the evaluation it was chosen to collect 4 votes per concept initially.
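The counts of 625 ordered and 70 unique voting combinations can be verified with a short enumeration:

```python
from itertools import product

# All ordered outcomes of 4 votes on a discrete 5-point scale.
ordered = list(product(range(1, 6), repeat=4))
print(len(ordered))      # 625 = 5**4

# Order does not matter for the average and standard deviation, so count
# the distinct multisets of votes instead.
unordered = {tuple(sorted(v)) for v in ordered}
print(len(unordered))    # 70 = C(5 + 4 - 1, 4)
```

The 70 unique combinations are exactly the multisets of size 4 drawn from 5 values, i.e. C(8, 4) = 70.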


Figure 7.2, 5-point voting scale

When using a discrete 5-point scale and 4 votes per concept, the standard deviation of the votes per concept ranges from 0 (when all votes are equal) to 2.31 (two votes of 1 and two votes of 5). Table 7.3 lists some interesting combinations of voting scores, together with the corresponding average and standard deviation. For full details, see Appendix N.

#    Vote 1   Vote 2   Vote 3   Vote 4   Average   St.Dev.
1    1        1        1        1        1.00      0.00
2    5        5        5        5        5.00      0.00
3    1        1        1        2        1.25      0.50
4    1        1        2        2        1.50      0.58
5    2        2        3        4        2.75      0.96
6    1        1        1        3        1.50      1.00
7    1        1        3        3        2.00      1.15
8    1        1        2        4        2.00      1.41
9    5        5        2        4        4.00      1.41
10   1        1        1        4        1.75      1.50
11   1        1        4        4        2.50      1.73
12   1        2        3        5        2.75      1.71
13   1        2        4        5        3.00      1.83
14   1        1        2        5        2.25      1.89
15   1        1        3        5        2.50      1.91
16   1        1        5        5        3.00      2.31

Table 7.3, average and standard deviation for 4 votes on a 5-point scale
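The values in Table 7.3 use the sample standard deviation and can be reproduced as follows:

```python
from statistics import mean, stdev

def score(votes):
    """Average and sample standard deviation of one concept's votes,
    rounded as in Table 7.3."""
    votes = [float(v) for v in votes]
    return round(mean(votes), 2), round(stdev(votes), 2)

print(score([1, 1, 1, 1]))  # (1.0, 0.0)  -> full consensus
print(score([1, 1, 5, 5]))  # (3.0, 2.31) -> maximal disagreement
```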

From Table 7.3 it is clear that only the combination of the average value and the standard deviation allows making sense of the voting score. Rows 1 to 4 show the standard deviations for voting combinations where there is no difference, or less than two points difference, in individual scores. Based on this, a standard deviation of less than or equal to 0.58 can be regarded by the system as consensus about the average score of the vote for the concept. Rows 11 to 16 show combinations where there are large differences in individual votes; the corresponding standard deviations are in the range of 1.71 to 2.31. The system should classify this as non-consensus about the average score of the vote for a concept. There then remains a gray area of standard deviations, the range between 0.58 and 1.71, corresponding with rows 5 to 10 in Table 7.3. Consider for instance rows 6 and 7, and 8 and 9; based on the standard deviation value and the two rules defined above, it is not possible to classify these hypothetical concepts. If the average voting value is also taken into account, it is not difficult to see that the participants who voted on examples 6 and 7 do not consider them critical and do not want to spend further attention on them: all participants voted at or below three, meaning they do not find the concept interesting or critical enough. Examples 8 and 9 present a different story; here at least one participant voted above the threshold value of three, meaning that he or she finds the concept interesting and critical to the group goal. In the case of example 9 it could be said that, based on the high average value, this concept should be included in the next round. In the case of example 8, ideally a revote (by more participants) or a short group discussion should reveal the real value of the concept.
To establish a set of guidelines and to find the threshold values for the average and standard deviation of the voting scores, all possible outcomes (all possible combinations of voting values) have been coded into three categories. The categories are (1) consensus, select, (2) consensus, do not select, and (3) no consensus, revote (or discuss). The coding has been done by the author of this thesis and is based on experience; therefore it is very likely that other facilitators would use different values. The values used in this thesis were determined by looking at all possible combinations of votes, see the table in Appendix N; per combination it was decided whether it meant accept, decline or revote. The decision was made by the author based on the four voting scores, and is therefore biased. All combinations were then plotted in a graph, see Appendix N, to derive the set of rules presented in Table 7.4.

Category   Condition
1          Average > 3.00 AND St.Dev. ≤ 2.00
2          Average ≤ 2.25 (any St.Dev.), or Average ≤ 2.50 AND St.Dev. ≤ 1.29, or Average ≤ 2.75 AND St.Dev. ≤ 0.50
3          All others

Table 7.4, classification rules
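Reading the three threshold regions listed for category 2 as a union (one possible interpretation of the table), the rules can be sketched as follows; the thresholds are the author's defaults and, as noted below, should be adjustable per workshop.

```python
def classify(avg: float, sd: float) -> int:
    """Classification rules in the spirit of Table 7.4.
    Returns 1 (consensus: select), 2 (consensus: do not select)
    or 3 (no consensus: revote or discuss)."""
    if avg > 3.00 and sd <= 2.00:
        return 1   # clearly useful
    if avg <= 2.25 or (avg <= 2.50 and sd <= 1.29) or (avg <= 2.75 and sd <= 0.50):
        return 2   # clearly not useful
    return 3       # indecisive: revote or discuss

print(classify(4.00, 1.41))  # row 9 of Table 7.3  -> 1 (select)
print(classify(1.50, 0.58))  # row 4 of Table 7.3  -> 2 (do not select)
print(classify(3.00, 2.31))  # row 16 of Table 7.3 -> 3 (revote)
```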

The threshold values should be adjustable for every workshop, dependent on the goals, the demands of the group and facilitator preferences. It is not desirable that a concept falls into two categories at once; therefore a graphical representation of all voting combinations is helpful when making the rules.

Name of the method: Divide&Conquer

Because of the division of concepts among the participants for voting, the thinkLet is named Divide&Conquer. This name was suggested by the first supervisor of this thesis, Stephan G. Lukosch.

After the selection

After the first and second rounds of voting, the outcome is presented in a categorized overview; within each category the concepts are ordered by average value. Since this method only enables a pre-selection, it is likely that another method is needed to arrive at the final, desired version of the converged artefact. For large groups and inexperienced facilitators a method is advised that uses parallelism for further convergence, because this lowers cognitive load and facilitator dependence and is very scalable. Such a method is presented in section 7.3.

7.3 Modification of the FocusBuilder thinkLet

This section describes a change in the design of the FocusBuilder thinkLet and describes the use of similarity detection within the execution of this thinkLet. Hereby we address opportunities for improvement three and four, as described in the previous chapter. Both proposed changes are modifiers to the thinkLet (Kolfschoten & Santanen, 2007). To speed up the process of convergence, after brainstorming or after the group has made a quick selection as described in the previous section, working in parallel instead of converging on a list of concepts in a plenary discussion has great potential to save time, reduce cognitive load and reduce facilitator dependence, as is exemplified by Badura et al. (2009) and Davis et al. (2007). Further, working in subgroups is more scalable than a plenary way of working. This way of converging led researchers to the development of the FocusBuilder and FastHarvest thinkLets, see section 6.1 for more details. The basic idea is that, in rounds, groups of participants create non-redundant and summarized lists of concepts, based on a list of concepts offered to them by the GSS. By letting these groups combine their lists in successive rounds, a final converged version of the brainstorm artefact is created. The modified approach is visualized in Figure 7.3.


7.3.1 Scenario

To further detail this method for convergence, a scenario of its usage is sketched here. A group has a set of brainstormed concepts. They may have made a first quick selection on this set, but the set still contains redundant and ambiguous concepts. Also, the value of the concepts with respect to the group's goal (instrumentality) is not clear yet. The goal of the group is to remove all redundancy and to make sure that every group member understands the meaning of every concept (creating shared understanding). This can serve as input for further elaboration or selection of concepts, based on instrumentality. The group does not want to lose time in doing this and therefore wants to benefit from parallel working, one of the characteristics of working in a GSS. The facilitator creates groups of participants; an even number of groups is needed. Suppose in this case that the facilitator creates four groups. Each of the groups gets a part of the concepts on their screens and is asked to remove redundant concepts and to make a summary of the concepts on their screen. To do so, the GSS offers facilities to change text and wording, to combine concepts and to use some formatting options, like colours and bold writing. The summary that the group makes should be based on a group discussion. When a group is finished, they can submit their new list of concepts to the GSS.

Figure 7.3, modified FocusBuilder thinkLet.

The deliverable of this first round is four lists of concepts. When at least two groups are finished, the GSS notifies the facilitator. He can now combine these two groups and ask them to merge their two lists into one new list. The two groups then sit together and work on combining their lists. To do so, the GSS provides a view in which the two lists are visible, together with an editing field and a target list. The second round of this activity is thus formed by teaming up the four groups into two groups and asking them to combine their two lists of concepts. The deliverable of this round is two lists of concepts. The final goal is to combine these two lists into one final list, representing all non-redundant and summarized concepts that the group started with. This last round can be guided by the facilitator; the GSS supports it by showing the two lists at the same time, together with a target list. This target list contains all concepts in their combined form and serves as the end deliverable for this activity. The list can serve as input for a next activity or as the conclusion of the workshop.

7.3.2 Details of the method

Forming subgroups

The facilitator determines the formation of the subgroups; the only hard constraint is that the total number of subgroups needs to be even. The facilitator can use his personal knowledge of the background of the participants to form the subgroups. Based on this he could choose to group together participants with different backgrounds and different hierarchical positions; this is likely to prolong the time that the participants need to work in the subgroups, but will lead to interesting discussions, with arguments from different perspectives, resulting in shared understanding. When participants with a more similar background are grouped together, it is likely that differences of interpretation and perceived instrumentality come to light during the final round of the activity.

Dividing the concepts

As many subsets of concepts need to be created as there are subgroups of participants. A constraint is that there always needs to be an even number of subsets of concepts. The subsets can be created completely at random (each concept appears in only one subset) or with the use of similarity information, for example normalized word vectors. When the latter technique is used, two options are available. The facilitator can choose to let the GSS group similar-looking concepts into the same subset; this is likely to increase speed, but could reduce comprehensiveness. The facilitator could also choose to let the GSS place similar-looking concepts into different subsets; this probably invokes more discussion in the final round but is likely to increase comprehensiveness.

Multiple rounds: combining subsets Based on the number of subgroups of participants, two or more rounds of parallel group work are needed. With four subgroups, two parallel rounds and one final round are needed; with eight subgroups, three parallel rounds and one final round. In the case of six subgroups, two parallel rounds and one final round are advised: after the first round, the third group is split and combined with the other two groups before continuing with the second round. Two options are available to select the subgroups that combine their lists in the second and third round. This can be done at random, based on the time at which the subgroups finish their work in the first round: the first two groups to finish are selected to combine their lists. Another option is to use similarity detection between the newly created lists of concepts of the four groups; the lists with the highest degree of similarity are then combined. In this case the GSS needs to be able to quickly calculate the similarity scores between the lists and show these to the facilitator, who can then decide, based on this information, which lists have to be combined.
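Selecting which lists to combine by similarity could look like the following greedy sketch. The `list_similarity` function is a placeholder (for instance an aggregate NWV score over two whole lists); it is not defined in this thesis.

```python
from itertools import combinations

def pair_most_similar(lists, list_similarity):
    """Greedily pair the lists of concepts with the highest mutual
    similarity, so that each pair can be merged in the next round."""
    remaining = list(range(len(lists)))
    pairs = []
    while len(remaining) > 1:
        best = max(combinations(remaining, 2),
                   key=lambda p: list_similarity(lists[p[0]], lists[p[1]]))
        pairs.append(best)
        remaining = [i for i in remaining if i not in best]
    return pairs
```

The facilitator would see the resulting pairs (or the underlying similarity matrix) and could overrule them before the next round starts.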


The final round To create the final list of concepts, two lists need to be combined. This happens in a facilitated discussion in which the two lists are presented and, where needed, concepts are combined. This last round makes sure no critical concepts are left behind and fosters the creation of shared understanding. The output of the method is a list of on-topic, non-redundant concepts that the group understands in a shared way. Any evaluation, organization or consensus-building thinkLet can be used after this thinkLet.

7.4 Summary of the designed artefacts This chapter has described the design of three artefacts that make use of the opportunities identified in section 6.6 of the previous chapter. We presented a technique for automatic detection of similar and redundant concepts. With this technique the task of detecting redundant concepts is moved from the facilitator and participants to the GSS. This lowers the workload for the facilitator and participants and potentially speeds up the convergence process. The technique is based on Normalized Word Vectors (NWV); evaluation has shown that, compared to manual detection of redundant concepts, NWV is able to detect 50% of the redundant concepts.
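As an illustration of the idea behind such a technique, a much simplified bag-of-words variant can be written as follows. The real NWV approach additionally normalizes word forms and weights, so this sketch conveys only the principle; the threshold value is an assumption.

```python
import math
from collections import Counter

def word_vector(text):
    # Simplified stand-in for a normalized word vector: a unit-length
    # bag-of-words vector. (The actual NWV technique also normalizes
    # word forms, e.g. by stemming, before counting.)
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(n * n for n in counts.values()))
    return {w: n / norm for w, n in counts.items()}

def cosine_similarity(a, b):
    return sum(weight * b.get(word, 0.0) for word, weight in a.items())

def redundant_pairs(concepts, threshold=0.6):
    """Flag pairs of concepts whose similarity exceeds the threshold."""
    vectors = [word_vector(c) for c in concepts]
    return [(i, j)
            for i in range(len(concepts))
            for j in range(i + 1, len(concepts))
            if cosine_similarity(vectors[i], vectors[j]) >= threshold]
```

A facilitator would review the flagged pairs rather than trust them blindly, which is consistent with the 50% detection rate reported above.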

To remove the hurdles that exist when converging in a scalable, parallel way of working, we proposed a modification to the FocusBuilder thinkLet. The modification aims to improve the comprehensiveness of the end result created with this thinkLet; field evaluation should reveal whether this really is the case. Another weak point of the FocusBuilder thinkLet is the inability of the facilitator to guide and/or monitor the process. We try to tackle this hurdle with a new set of GSS capabilities that are described in the next chapter.

We have designed a new thinkLet, Divide&Conquer, that enables a group to quickly make a pre-selection of concepts after a brainstorm activity. The thinkLet uses a voting process in which the number of votes collected per concept depends on the average and standard deviation of the scores obtained by asking four participants to vote on the concept. When all four voting scores are high with a low standard deviation, the concept is included in the pre-selection. When all four votes are low with a low standard deviation, the concept is excluded from the pre-selection. When the votes are moderately high or low with a high standard deviation, more votes are collected to determine whether the concept needs to be in- or excluded from the pre-selection. In this way we try to minimize the number of votes per participant, which should decrease workload and increase speed and scalability.

In the next chapter we will outline the capabilities that a GSS needs to support a group in executing the Divide&Conquer and modified FocusBuilder thinkLet.


8 GSS requirements for the Divide&Conquer and FocusBuilder thinkLet

In the previous chapter a new thinkLet and a modifier for an existing thinkLet were described. In this chapter the question 'What capabilities must a GSS offer to support a successful and effective convergence pattern of collaboration?' is answered. We limit the description to the capabilities needed for executing the Divide&Conquer thinkLet and the modified FocusBuilder thinkLet. The two thinkLets proposed in sections 7.2 and 7.3 put requirements on the GSS, which are described in the following two sections. Support for the Divide&Conquer thinkLet has been realized within the TeamSupport GSS package.

8.1 Divide&Conquer thinkLet The GSS should allow for voting on a 5-point scale, like the one visualized in Figure 7.2. The default value for a vote should be three. The GSS should be able to collect the voting scores from multiple participants and show the results on a screen. The results should include: (1) the concept, (2) its average score and (3) the standard deviation of the scores. It should also be possible to reveal the exact individual submissions.

Dividing concepts for voting To speed up the voting process, each concept is rated by a subset of the participants instead of by the entire group. The GSS should therefore be able to divide the concepts among the participants in such a way that each concept is rated by exactly the predefined number of participants. This number should be adjustable. For the first experiment, four participants per concept will be used. The GSS should divide the concepts in such a way that no participant gets to vote on his own concepts. 0 gives a possible way of doing this.
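One possible (purely illustrative) way of dividing the concepts is a simple load-balancing assignment; the function below is a sketch under these assumptions, not the TeamSupport algorithm.

```python
def assign_votes(concepts, authors, participants, votes_per_concept=4):
    """Assign each concept to `votes_per_concept` participants, balancing
    the load and skipping a concept's own author where possible.
    `authors[i]` is the participant who contributed concepts[i]."""
    load = {p: 0 for p in participants}
    assignment = {}
    for i, concept in enumerate(concepts):
        eligible = [p for p in participants if p != authors[i]]
        if len(eligible) < votes_per_concept:   # fall back if the group is too small
            eligible = list(participants)
        eligible.sort(key=lambda p: load[p])    # least-loaded voters first
        voters = eligible[:votes_per_concept]
        for p in voters:
            load[p] += 1
        assignment[i] = voters
    return assignment
```

Sorting by current load keeps the number of votes per participant roughly equal, which is the stated goal of rating each concept by only a subset of the group.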

Automatic classification of concepts The GSS should be able to automatically classify the concepts after voting into three categories:

1. Consensus, select: include in next activities.
2. Consensus, do not select: disregard these concepts in next activities.
3. No consensus, revote or discuss: these concepts should be included in next activities, unless the eventual revote indicates otherwise.

Two variables are important for the classification of the concepts: the average voting score and the standard deviation. It should be possible to let the GSS classify the concepts based on multiple rules per category, and the facilitator should be able to adjust and/or redefine these rules. For the first evaluation a 5-point scale and 4 participants per concept will be used. The rules defined for this are based on a simulation in MS Excel (see Appendix N); the resulting set of rules is presented in Table 7.4.
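The classification can be sketched as follows. The threshold values here are placeholders, since the actual rules are those derived from the Excel simulation in Appendix N (Table 7.4); the point of the sketch is only that the decision depends on the mean and the standard deviation of the votes.

```python
import statistics

def classify(votes, high=3.5, low=2.5, max_spread=1.0):
    """Classify a concept from its votes on a 5-point scale.

    The thresholds are illustrative placeholders; in the thesis the
    rules come from a spreadsheet simulation (Appendix N, Table 7.4).
    """
    mean = statistics.mean(votes)
    spread = statistics.stdev(votes) if len(votes) > 1 else 0.0
    if mean >= high and spread <= max_spread:
        return "consensus: select"
    if mean <= low and spread <= max_spread:
        return "consensus: do not select"
    return "no consensus: revote or discuss"
```

The facilitator-adjustable rules mentioned above would correspond to changing the `high`, `low` and `max_spread` parameters, or replacing the rule set entirely.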

Multiple rounds of voting As an option the GSS should offer the possibility of a second round of voting, immediately after the first one. The concepts included in this second round are selected based on their average voting score and standard deviation; see Appendix N for an example. Participants should not have to vote on a concept twice; the GSS should therefore select other participants for the second round of voting than in the first round. Preferably this selection still excludes the author of a concept, although that is not always possible.

Metrics To evaluate this way of selecting the following metrics need to be recorded by the GSS:

• A log file containing all concepts.

• A log file containing the voting settings (number of concepts per participant, total number of votes) and the duration of the individual votes; at least the time between when a vote was issued and when it was submitted should be recorded.

• A log of how the GSS divided the concepts among the participants for voting, both for the first and second round.

• The division of concepts after the first round of voting and the division of concepts after the second round of voting.

• The individual voting scores.

• Execution times

8.2 Modified FocusBuilder thinkLet In this section the GSS requirements to effectively support the modified FocusBuilder thinkLet will be outlined.

Dividing concepts and subsets of concepts The GSS should be able to use NWV to select concepts for the subsets, and should also be able to do this at random. The division of participants into subgroups is done by the facilitator, based on the distribution of power and knowledge in the group. When a similarity detection technique is available, concepts can be divided according to the calculated similarities.

Similarity detection using normalized word vectors NWV can be used to assist in the composition of the subsets of concepts. Similar concepts can be grouped together; this is expected to lower the cognitive load of the participants. The aim is to create subsets that are orthogonal with respect to similarity: all similar concepts are present within the same subset. This allows a subgroup to focus on the key idea of a subset of concepts. NWV can also be used to select the order in which the concepts of a subset are presented to the subgroup of participants. A third option is to use NWV to assess the similarity of the subsets after the first round, to determine which subsets should be combined in the second round.

GSS interface for parallel convergence: participants The screen on which the participants work when making the summary and removing redundancy should consist of three parts. One part (on the left) shows the list of concepts created by the system. To lower cognitive load, this list is not shown all at once, but in portions of 7-10 concepts at a time. On the right there should be a part with an empty list; this is where the new list is to be built. In the middle there is a section where the group of participants can edit, combine and reformulate concepts. The system allows concepts to be dragged and dropped between the three parts of the screen.

GSS interface for parallel convergence: facilitator The facilitator interface includes the following elements:


• A small overview per team, showing the names of the participants and their progress. This progress is measured as the size of the initial list compared to its current state. The level of convergence could also be displayed, by comparing the number of concepts on the initial list with the number of concepts on the target list.

• By clicking the overview of a team, the team's target list, 'still-to-do' list (initial list) and editing field are displayed.

• A message box to send instructions, either to all participants or only a specific team or teams.

• A similarity matrix of the target lists of the different teams

• Control buttons (go to next round, stop, etc.)

Routing and different versions of lists of concepts The GSS should be able to create, send and receive several versions of lists of concepts. An overview is given here. First of all there is the original list of concepts, which is divided into four parts, say 1a, 1b, 1c and 1d. Each of these four is sent to a workstation, where a group of participants transforms the list into a more converged form; call these four lists 1af, 1bf, 1cf and 1df. These four final lists are then sent back to the GSS. The GSS needs to save them; the original lists (1a, 1b, 1c, 1d) must not be overwritten. The next round consists of combining the four final lists (1af, 1bf, 1cf and 1df) into two lists in two groups, using two workstations. The GSS needs to be able to send two of the lists, say 1af and 1bf, to one workstation and the other two to another workstation. At both workstations, groups of participants combine the two lists into a new one: 1af and 1bf are combined into 2a, and 1cf and 1df into 2b. The lists 2a and 2b need to be saved by the GSS. The last round of the activity consists of combining lists 2a and 2b into a new one, say list 3.
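This versioning scheme can be sketched as a small planning function. The code is illustrative only; the labels follow the naming used in the text (1a..1d converge to 1af..1df, which are pairwise merged into 2a, 2b and finally 3).

```python
def plan_rounds(initial_subsets):
    """Plan the merge rounds: each subset is first converged in parallel,
    then the results are pairwise combined until one final list remains.
    Returns the list labels produced per round."""
    rounds = []
    # Round 1: every subset is converged into a final ("f") version.
    current = [f"{label}f" for label in initial_subsets]
    rounds.append(current)
    level = 2
    while len(current) > 1:
        # Each subsequent round halves the number of lists.
        merged = [f"{level}{chr(ord('a') + i)}"
                  for i in range(len(current) // 2)]
        if len(merged) == 1:
            merged = [str(level)]   # the single final list gets a plain number
        rounds.append(merged)
        current = merged
        level += 1
    return rounds
```

Keeping every version (rather than overwriting) corresponds to the requirement above that the GSS must save each round's lists separately.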

Metrics The GSS should log the initial list of concepts and the final list of concepts for every subgroup in each round separately, including the times it took to create the lists.

8.3 Scrum For the development of the version of TeamSupport that can be used to evaluate the two new thinkLets, the agile and iterative development method Scrum is used. The first step is to define a product backlog. The backlog typically contains product features, functionality, infrastructure, architecture and technology work (Schwaber & Beedle, 2001). The backlog is based on two user stories: one for the Divide&Conquer thinkLet scenario and one for the FocusBuilder thinkLet scenario. The two scenarios are described in sections 7.2.1 and 7.3.1. The functionalities are divided into facilitator and participant functionalities. Before describing the functionalities, the intended layout and appearance of the GSS was designed. Because the TeamSupport platform is used, the mock-ups, displayed in Appendix Q and Appendix U, are based on screenshots of the current TeamSupport version. Several iterations were needed to arrive at the final backlogs. Unfortunately development time was limited and costly; therefore it was not possible to develop the support for the FocusBuilder thinkLet. The Scrum backlog for the development of the support module for the Divide&Conquer thinkLet and FocusBuilder thinkLet can be found in Appendix K.


Scrum result The result of the development effort can best be assessed by looking at the finished software; Appendix U, Figure 8.1 and Figure 8.2 contain screenshots. Due to limitations in software development resources and project time, it was only possible to develop software support for the Divide&Conquer thinkLet. Figure 8.1 shows the participant view of the voting module in a Firefox web browser. The top of the screen shows the voting scale, as a way to communicate the scale and its meaning to the participants; the scale is fully customizable by the facilitator. Below the picture of the scale all concepts are listed, with a way to rate each concept presented on its right. Figure 8.2 shows the voting result screen in an Internet Explorer web browser; this screen is always visible for the facilitator and can be pushed to the participants when desired. The screen shows all concepts and their voting scores. At the top right of the screen a table shows an overview of the number of concepts classified in the three categories. From this screen the facilitator can start a second round of voting or copy the concepts to a new module.

Architecture evaluation of the Divide&Conquer thinkLet To demonstrate that the TeamSupport software has the correct functionality to execute the thinkLet, a workshop was executed. Carlsson (2006) argues that any information system should always be evaluated in the environment where it is intended to be used. Therefore a small workshop was organised with a group of 9 students. They brainstormed on two simple topics (ways to reduce the costs of driving a car, and ways to make living more environmentally friendly) and used the system to select the most promising cost-reduction ideas and the most promising ways to increase the environmental friendliness of living. Brainstorming for both tasks happened according to the FreeBrainstorm thinkLet. For the first task the team brainstormed 52 ideas in 9 minutes; for the second task, 63 ideas in 12 minutes. In the next step the two tasks differed. For the first task all participants were asked to rate all ideas on a five-point scale, with usefulness as the criterion. For the second task the newly developed TeamSupport module for the Divide&Conquer thinkLet was used, which initially collects only four votes per concept and offers the possibility of a second round of voting on the ideas with a high standard deviation and low or average values for usefulness. The participants were able to successfully express their votes in both the first and second round of voting. The TeamSupport development team was present to monitor the system's behaviour and detected no irregular behaviour. It can therefore be concluded that the system operates in accordance with the design as specified in the Scrum backlog and can be used for further evaluation. Because of the general nature of the tasks and the low commitment of the participants to the 'problem', the results are not discussed in detail.


Figure 8.1, TeamSupport Divide&Conquer module, voting

Figure 8.2, TeamSupport Divide&Conquer module, voting results


9 Evaluation of the Divide&Conquer thinkLet, modified FocusBuilder thinkLet and Normalized Word Vectors

To assess the performance of the developed Divide&Conquer thinkLet, the modified FocusBuilder thinkLet and the similarity detection technique, evaluation is needed. Page 111 lists evaluation methods mentioned by Hevner et al. (2004), together with comments from research by Davis et al. (2007). The two thinkLets and the technique for similarity detection need to be executed with a group to evaluate their performance. The performance will be evaluated based on a set of performance metrics, both process and result oriented. Functional (black-box) testing is chosen as the method for evaluation, because it approaches a real-life situation. The evaluation answers the question whether the developed artefacts help in making convergence processes more successful and effective. This chapter is structured as follows: first the evaluation goals and tasks are introduced, then sections 9.3, 9.4 and 9.5 present the results of the three evaluation workshops. The conclusions based on the three evaluation workshops are presented in section 9.6.

9.1 Evaluation goals The goal of evaluating these methods is to determine the performance of the two thinkLets in terms of the dimensions described in chapter 5. The main focus is on acceptance, satisfaction, facilitator dependence, ease of use, speed and scalability of the methods, because these relate directly to the opportunities for improvement that we have identified in chapter 6.

Metrics To assess the performance of the two new thinkLets and the technique for similarity detection the result oriented and process oriented metrics described in chapter 5 will be used.

Number of evaluation workshops Preferably the number of evaluation workshops is as high as possible, because more workshops increase the statistical power of the evaluation and make it easier to generalize from the results achieved. Unfortunately time and other resources, like participants, are limited. Within the timeframe of this graduation project, it is possible to prepare and execute three workshops for evaluation. Within the field of (process oriented) GSS research it is not uncommon to use a limited number of evaluation workshops and present the results. Examples are given by Badura et al. (2009), who used a single workshop to present the performance of a thinkLet for convergence, Bragge et al. (2007), who used one workshop to evaluate the performance of a 6-hour workshop design, and Harder and Higley (2002), who used three workshops to evaluate the design of an entire collaboration process.

Participants To evaluate the methods using the workshop design described in the previous paragraph, groups of participants are needed. In cooperation with the relevant stakeholders a more detailed design of the workshops was made, which is described in the next section.

9.2 Evaluation tasks Within the time available it was feasible to execute three workshops to evaluate the methods. Appendix P describes the detailed designs of the three workshops and Appendix Q gives an overview of their detailed agendas; summaries are presented below.


All three workshops were scheduled within the Bachelor curriculum of the faculty of Technology, Policy and Management of Delft University of Technology, within a third-year course on Policy, Economics and Law. The topics of the workshops were current and relevant topics within the field of study of the course. For the ICT students within the course the workshop was on green-ICT, for the energy students the workshop was on energy transition, and for the transport students the workshop was on the concept of shared space. All topics and workshop designs were considered relevant for the students and the course and were made in close cooperation with the relevant teachers. The first workshop (green-ICT) uses the modified FocusBuilder thinkLet (without similarity detection), the second workshop (energy transition) uses the Divide&Conquer thinkLet and evaluates the participants' reaction to similarity detection, and the third workshop (shared space) uses the Divide&Conquer thinkLet. In the three sections below we present the outline as well as the result and process oriented metrics of the three workshops.

9.3 Green-ICT workshop The next sections will describe the execution of the workshop and will present the collected evaluation data.

9.3.1 Workshop execution The workshop was not completely finished, because the available amount of time proved too short. The workshop was therefore ended after agenda point 6 (brainstorming for relevant actors); convergence on the list of actors was done in a subsequent lecture. The brainstorm and organize functions of GroupSystems' ThinkTank were used to give the participants a platform to brainstorm and execute the modified FocusBuilder thinkLet. In the opinion of the facilitator the system worked fine, but some participants complained about long response times from the server, and during the 90 minutes some error messages and disconnections of users occurred. The facilitator blamed this on the wireless infrastructure, but later inspection revealed that the ThinkTank server was not connected correctly to the switch, causing these problems. The workshop was executed by a group of 13 bachelor students in the technology, policy and management domain with a specialisation in ICT. All participants were male, aged between 20 and 25, and had little or no working experience.

9.3.2 Workshop evaluation During the workshop a questionnaire was used to measure the process oriented metrics; coding of the generated data was used to quantify the result oriented metrics. The results were discussed with the participants in a follow-up meeting.

Process oriented evaluation After the first part of the workshop, the participants were asked to fill out 16 questions. These questions measured commitment to the process, satisfaction with the results and satisfaction with the process; one open question could be used to write down general comments and remarks. Table 9.1 shows the values for commitment to the process (CP_1), satisfaction with the process (SP_1) and satisfaction with the result (SR_1). 13 participants filled out this part of the questionnaire. All questions could be answered on a 7-point Likert scale. Although all three values are above the neutral point (4) on the scale, they are not very high. This can be explained by looking at the commitment level of the students, which is also relatively low. Analysis showed that students who gave a low value for commitment also gave low values for satisfaction. This conclusion was also found in previous research by Duivenvoorde (2008). When the four cases with a commitment level below four are removed, the satisfaction levels increase a little and the standard deviation drops significantly (SP_1: 5.7 (std. dev. 0.7) and SR_1: 4.7 (std. dev. 0.8)).

                 CP_1   SP_1   SR_1
N valid            13     13     13
N missing           0      0      0
Mean              4.8    5.2    4.5
Std. deviation    1.5    1.1    1.1
Minimum           1.4    2.8    1.8
Maximum           6.6    6.8    6.0

Table 9.1, green-ICT workshop: commitment and satisfaction for the first part of the workshop

Four participants used the open question to give extra remarks. Two of them indicated that the system was less clear because it had no auto-scroll function; one mentioned that this was the worst GSS package he had ever encountered; and another mentioned that the system was slow, unstable and lacking overview. All comments concentrated on technical aspects of the execution of the workshop, which can be explained by the participants' background in ICT. In a follow-up meeting these comments were discussed. Indeed all dissatisfaction arose from technical issues with the ThinkTank software, the laptops used and the wireless internet connection. Considering the input part of the input-process-output model, it is clear that neglecting the factor 'background of participants' (see Figure 5.2) led to these results; in a sense it is logical that ICT students put emphasis on technical factors, like screen size, stability and performance of a software tool. After the second part of the workshop (the brainstorm on actors) the participants again filled out the questionnaire. This time commitment to the results, efficiency and productivity were also included. The values are listed in Table 9.2.

                  SP    SR    CP    PR    EF    CR
N valid           12    12    12    12    12    12
N missing          1     1     1     1     1     1
Mean             4.7   4.4   5.1   5.2   4.3   4.4
Std. deviation   1.4   1.3   1.3   1.0   1.3   0.9
Minimum          2.2   2.0   1.6   3.4   2.4   3.0
Maximum          6.2   7.0   6.8   7.0   6.2   6.0

Table 9.2, green-ICT workshop: commitment, satisfaction, productivity and efficiency for the entire workshop.

This part of the questionnaire also had several open questions for comments on the process followed in the workshop and the quality of its facilitation. Regarding the comments made (Appendix S, tables 1 and 2), they seem to have been made by a different group of participants than the one present in the workshop. Based on the comments it is concluded that the only effect of the facilitation intervention on the group process was a positive one. Only participant #11 commented on the process in a negative way, stating that he found it (a little) useless (the same participant commented that he found the results 'funny'). This comment reflects his commitment to the subject more than the quality of the process itself. It is therefore concluded that technical issues influenced the process evaluations of this workshop in a negative way. This conclusion is strengthened by the answers to another open question, 'What do you think of the results of the workshop?', visualized in Appendix S, table 2.

Result oriented evaluation For the first part of the workshop, the brainstorm artefact and the convergence artefact were coded. The convergence artefact consists of multiple versions, since convergence occurred in three rounds. The two brainstorm artefacts (from the first and second brainstorm task) and the three versions of the convergence artefact (after rounds 1, 2 and 3) were coded using the coding scheme presented and validated by Badura et al. (2010). The coded data and raw data from the brainstorm and convergence tasks are present on the CD-ROM on the last page of this thesis. A summary of the coded data and the derived metrics is presented in Appendix R.

Speed The initial brainstorm for factors concerning green-ICT was completed in 12 minutes. The following three rounds of convergence lasted 9, 11 and 12 minutes respectively, so the time needed for convergence is almost three times the time needed for brainstorming (the speed ratio). The relatively short time spent on brainstorming can be explained by good preparation by the participants and a fit between the topic and their interests.

Level of reduction The final level of reduction achieved, in terms of raw concepts, was 36%. The first round led to the greatest reduction (30%), the next round had a reduction level of 10%, and in the third round some divergence occurred (-2%). The divergence in the last round was due to the addition of some comments to clarify the meaning of two concepts. In terms of disaggregated concepts the reduction rate was 40% in total. In the factor brainstorm artefact some off-topic concepts were present (14%); these were eliminated after the first round of convergence.

Reduction   RC [%]   RCD [%]
total         36%      40%
round 1       30%      36%
round 2       10%       9%
round 3       -2%      -3%

Table 9.3, green-ICT workshop, level of reduction

Level of redundancy Initially the factor brainstorm artefact contained only 54% unique (disaggregated) concepts. In the three rounds of convergence this increased to 84% unique concepts. The largest increase in unique concepts occurred in the first round of convergence.

Redundancy             RC-on [%]   RC-off [%]   RCD-unique [%]   RCD-redundant [%]
Brainstorm factors        86%         14%            54%               46%
FocusBuilder round 1     100%          0%            77%               23%
FocusBuilder round 2     100%          0%            75%               25%
FocusBuilder round 3     100%          0%            84%               16%

Table 9.4, green-ICT workshop, level of redundancy

Level of shared understanding After coding, only 58% of the concepts in the factor brainstorm artefact were considered unambiguous. This increased to 96% after the three rounds of convergence; the three rounds had a more or less equal share in this increase.

Supporting convergence in groups

Evaluation of the Divide&Conquer thinkLet, modified FocusBuilder thinkLet and Normalized Word Vectors

81

Shared understanding   RC-U [%]   RC-A [%]   RCD-U [%]   RCD-A [%]
Brainstorm factors        58%       43%         61%         39%
FocusBuilder round 1      79%       21%         81%         19%
FocusBuilder round 2      80%       20%         85%         15%
FocusBuilder round 3      96%        4%         97%          3%

Table 9.5, green-ICT workshop, level of shared understanding

Level of comprehensiveness After selecting the on-topic and unique concepts and identifying matches between the brainstorm artefact and the convergence artefact, 8 concepts remained that had no direct match. These were discussed in the follow-up lecture and with the teacher. As the definition of critical, 'contributes to the formation of a broad overview of green-ICT' was chosen. The concepts from the factor brainstorm artefact that are on the candidate list for critical concepts, but have no direct match with a concept from the convergence artefact, are presented in Table 9.6.

#  Concept                                                            Critical?  Reason
1  energiezuinig (energy-efficient)                                   N          Too high level
2  duurzaamheid (sustainability)                                      N          Too high level
3  Groene energie bestaat niet                                        Y          Could have led to other interesting solutions
   (green energy does not exist)
4  gebruiksmilieu vervuiling vs productie vervuiling                  Y          Life cycle is important
   (pollution during use vs. pollution during production)
5  ICT als drijfveer voor hernieuwbare energie installaties           Y          Ambiguous, but could have been interesting in discussion
   (ICT as a driver for renewable energy installations)
6  dat je verschillende levensfasen hebt in hardware levenscyclus     Y          Life cycle is important
   (there are different life phases in the hardware life cycle)
7  ICT draagt wel evenveel bij aan CO2 uitstoot als de                N          Background, high level
   luchtvaart industrie (ICT contributes as much to CO2
   emissions as the aviation industry)
8  Stroom besparen door delen van internet infrastructuur uit te      Y          Interesting solution direction
   schakelen in daluren (saving power by switching off parts of
   the internet infrastructure during off-peak hours)

Table 9.6, green-ICT workshop, expert opinion on the criticalness of 8 brainstorm concepts

The brainstorm artefact contains 54 candidates for critical concepts, 8 of which do not have a direct match with a concept from the convergence artefact. A domain expert believed 5 of those 8 concepts are critical to the group goal of creating an overview of factors important for green-ICT. The level of comprehensiveness is 94%.

Level of comprehensiveness         RC
# candidates in brainstorm         54
# 'no direct matches'              8
expert opinion: critical           5
level of comprehensiveness [%]     94%
Table 9.7, green-ICT workshop, level of comprehensiveness

Level of refinement
A discussion with the teacher and an inspection of the brainstorm and convergence artefacts revealed that in both artefacts different levels of detail are present. Some participants mentioned general aspects within the field of green-ICT, others mentioned solution directions, and very concrete solutions were present in the artefacts as well. According to the teacher, all of these contribute to the formation of a broad picture of green-ICT. This is also reflected in the variable RC-on. Regarding the level of refinement it is concluded that in this particular case no single specific level is desired.


9.3.3 Limitations
Besides the fact that the data reflects only a single workshop, the workshop was executed by bachelor students. Although the students were highly motivated and cooperated nicely during the workshop, low values for commitment were found. A later in-class discussion revealed the probable cause: the due date of the first draft of their paper was four weeks after the workshop, so the students considered the workshop to come too early in the programme. Further, we have to remark that although we concluded that the facilitation effort did not influence the result in a negative way, it may have influenced the evaluation in a positive way. We believe that a facilitator always influences a group by definition. In none of the three workshops did the participants know that the workshop was part of a graduation project until after the workshop, when all evaluation questionnaires had been filled out and handed in.

9.4 Energy transition workshop
The next sections will describe the execution of the workshop and will present the collected evaluation data.

9.4.1 Workshop execution
16 students, the teacher and 3 observers from TeamSupport were present. The agenda was slightly changed: an extra brainstorming exercise was added before all other activities. The extra brainstorm question was to come up with factors or aspects of the energy transition; the goal of this exercise was to warm up the students. The students were divided into four groups right from the start of the workshop. For the remainder of the workshop the original plan and agenda were followed. Where possible the teacher participated in the process. After the break, one student did not come back to the workshop; in the following lecture this student was not available for comments. Due to technical issues, the GSS was not immediately available when activity 9 was started; the delay was 10 minutes. All four groups were able to make and present their strategy, and the quality of the presentations and the presented strategies was high. In the follow-up lecture some of the students mentioned that one of the observers gave his opinion on their presentations several times. This was not appreciated, and some of the students mentioned that they incorporated this feeling in the satisfaction questions in the questionnaire.

Figure 9.1, Energy transition workshop impression

The next sections describe the process and result oriented evaluation of this workshop.

Supporting convergence in groups

Evaluation of the Divide&Conquer thinkLet, modified FocusBuilder thinkLet and Normalized Word Vectors

83

9.4.2 Energy transition workshop evaluation
First the process oriented evaluation is described, by presenting and interpreting the data from the questionnaire. Next the result oriented evaluation is given, by describing the data from the coding of the workshop output.

Process oriented evaluation
The questionnaire had several open questions for comments on the process that was followed in the workshop and on the quality of facilitation. The questionnaire was divided into four parts to enable measuring satisfaction for different parts of the workshop separately. The first two brainstorm and convergence activities, on the topics of context (not in the agenda, added on the day of the workshop) and goals, are measured separately. For this part of the workshop, satisfaction with the result (aSR) and satisfaction with the process (aSP) were measured. The data is shown in Table 9.8.

                 aSP    aSR    bSP    bSR
N Valid          17     17     17     17
  Missing        0      0      0      0
Mean             4.9    4.2    5.0    4.1
Std. Deviation   1.1    1.1    1.2    1.2
Minimum          2.3    1.7    2.3    1.7
Maximum          6.3    5.7    6.3    5.7
Table 9.8, energy transition workshop, satisfaction with process and result

For the second part of the workshop, brainstorming and pre-selecting goals, satisfaction was also measured (bSP and bSR); see Table 9.8. Finally, after the presentations, commitment to the result (dCR) and the process (dCP), efficiency (dEF) and productivity (dPR) were measured. The data is shown in Table 9.9.

                 dCP    dPR    dEF    dCR
N Valid          13     13     13     13
  Missing        4      4      4      4
Mean             5.4    5.4    4.9    4.9
Std. Deviation   0.6    0.7    1.2    1.0
Minimum          4.0    4.0    2.3    3.7
Maximum          6.3    7.0    7.0    7.0
Table 9.9, energy transition workshop, commitment, efficiency and productivity

The participants’ scores for satisfaction are all on the positive side of the scale, but only just above the neutral point of four. The relatively low score for commitment to the process may contribute to this. Parts 2 and 3 of the workshop focused on brainstorming a list of possible instruments with the entire group; a pre-selection of the instruments was also made with the entire group. Next the group was split up into four sub-groups. Each group was given a copy of the list of (pre-)selected instruments. The copy was available in the TeamSupport software, which made it easy for the students to combine, rephrase and delete instruments. For groups 1&2 the order in which the instruments were presented was based on the automatically detected similarities; for groups 3&4 the order was random. For groups 1&2 satisfaction (cSPg1&2, cSRg1&2) and the effect of


the use of similarity detection were measured (cNWVg1&2). For groups 3&4 the same metrics were measured: satisfaction (cSPg3&4, cSRg3&4) and the effect of the use of similarity detection (cNWVg3&4). See Table 9.10 for an overview of the data.

                 cSPg1&2   cSRg1&2   cNWVg1&2   cSPg3&4   cSRg3&4   cNWVg3&4
N Valid          8         8         8          6         6         5
  Missing        9         9         9          11        11        12
Mean             4.3       4.7       5.4        4.5       4.9       4.4
Std. Deviation   1.4       1.0       0.9        1.4       1.0       1.0
Minimum          2.3       3.0       4.5        2.0       3.3       3.3
Maximum          6.7       6.0       6.8        6.0       6.0       5.5
Table 9.10, energy transition workshop, satisfaction and the use of NWV for the instrument part (2 & 3)

The difference between groups 1&2 and 3&4 is that groups 1&2 received the pre-selected list of instruments ordered according to the calculated similarities: similar items were placed together in the list, using the NWV technique. Groups 3&4 received the list in a random order. The four questions that measured the NWV construct were:

1. Ik vond het eenvoudig om dubbele instrumenten te vinden in de lijst I found it easy to identify double instruments in the list

2. Ik vond dat aan elkaar gelijke instrumenten dicht bij elkaar stonden in de lijst I found that similar instruments were close to each other in the list

3. Ik vond dat het vinden van dubbele instrumenten snel ging Identifying double instruments went fast

4. Het kostte mij weinig moeite om orde te creëren in de lijst met instrumenten It was easy to create order in the list of instruments

In Table 9.10 the scores on these four questions are averaged. Table 9.11 shows the individual results. Clearly, the groups that used the NWV-ordered sets of instruments score higher.

Question   Groups 1&2   Groups 3&4
1          5.8          4.2
2          5.0          4.6
3          5.6          4.6
4          5.3          4.2
Table 9.11, energy transition workshop, NWV question scores

Unfortunately, due to the small group sizes, no statistically significant results are available yet. The scores for satisfaction with process and result for this part of the workshop are more or less equal for all four groups; the groups that used the randomly ordered set score 0.2 higher on both satisfaction constructs. The open questions in the questionnaire could be used by the participants to express their comments on the process and facilitation (Appendix S table 3), their perception of the result (Appendix S table 4) and the selection method & software (Appendix S table 5). Based on the comments it is clear that facilitation did not influence the results in a negative way. The process was also accepted and found useful and interesting by the participants. One participant (#5) expressed a concern regarding the ordering and prioritization of the outcomes. The same participant (#5) answered, in the question about the result of the workshop, that he/she considered the ordering to be fast and unambiguous; this seems somewhat contradictory. A next open question asked the participants how they felt about the result of the workshop; the answers are copied into Appendix S table 4. Regarding the result (Appendix S table 5), some variation is found in the answers. Most of the comments are considered to be positive or moderately positive. Some


concerns are expressed regarding the time that was available; this is understandable since the agenda was very full. Overall it can be concluded that the result of the workshop reached the intended quality level. A last open question allowed the participants to enter final remarks. All remarks were about the pre-selection method and the software used. Three comments are directed at technical aspects of TeamSupport (#1, 2, 16); these issues should be fixed in a new release of the software. Comment #2 also concerns the use of anonymity in the workshop. Regarding the Divide&Conquer thinkLet, some participants are very positive. They like the speed of the thinkLet and the fact that it quickly filters out the nonsense. Other participants express concerns about the representativeness of four votes per concept and the way in which declined concepts are handled.

Result oriented evaluation
The next six sections elaborate on the values found regarding the results of the workshop. For the energy transition workshop, the brainstorm artefacts and the convergence artefacts were coded. The coded data and raw data from the brainstorm and convergence tasks are present on the CD-ROM on the last page of this thesis; an overview of the coded data and the derived metrics is presented in Appendix R. Below we introduce the values found.

Speed
Table 9.12 displays the time that was needed to finish the different parts of the workshop. All times are measured from the beginning to the end of an activity. Since the time needed for introduction and instructions varies per group and facilitator, these times are excluded from the table.

Execution times in minutes             Time [min]   Speed ratio
Context Brainstorm                     8
Context Pre-selection, round 1/1       8            1.0
Goals Brainstorm                       6
Goals Pre-selection, round 1/1         8            1.3
Goals selection                        20           4.7
Instruments Brainstorm                 10
Instruments Pre-selection round 1/2    5            0.5
Instruments Pre-selection round 2/2    5            1.0
Instruments selection Group 1          60           7.0
Instruments selection Group 2          60           7.0
Instruments selection Group 3          60           7.0
Instruments selection Group 4          60           7.0
Table 9.12, energy transition workshop, execution times of the different activities
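The speed ratios in Table 9.12 appear to relate the cumulative convergence time to the duration of the preceding brainstorm. A minimal sketch of this assumed calculation (the function name is ours, not from the thesis):

```python
def speed_ratio(brainstorm_min, convergence_steps_min):
    # Assumed definition: cumulative convergence time divided by the
    # time spent on the preceding brainstorm activity.
    return sum(convergence_steps_min) / brainstorm_min

# Goals: 6 min brainstorm, then 8 min pre-selection and 20 min selection
print(round(speed_ratio(6, [8]), 1))      # 1.3
print(round(speed_ratio(6, [8, 20]), 1))  # 4.7
```

This interpretation reproduces the goals and instruments rows of the table; for example, the instruments group selections give (5 + 5 + 60) / 10 = 7.0.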

For the brainstorm times, compared with the output (RC), no irregular values are found.

The gain in time from using the Divide&Conquer thinkLet can be expressed by looking at the number of votes per participant, compared to a traditional voting method using the same 5-point scale. In a traditional voting, regardless of the voting scale, the number of votes per participant is equal to the number of concepts in the brainstorm artefact, which is 68 in the case of the first brainstorm. By using the Divide&Conquer thinkLet, (68 * 4) / 17 = 16 votes per participant are collected. This results


in a considerable reduction in the amount of time needed for voting, and participants do not feel overwhelmed by a large number of votes to cast. Such overload may lead to less attention to the voting process and randomly placed votes.
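The vote-count arithmetic above can be sketched as follows (a minimal illustration; the function name is ours):

```python
def votes_per_participant(n_concepts, n_participants, votes_per_concept=4):
    # Under Divide&Conquer every concept receives a fixed number of votes
    # (four in this workshop), spread over the whole group.
    return n_concepts * votes_per_concept // n_participants

traditional = 68  # traditional voting: one vote per concept, 68 concepts
print(traditional, votes_per_participant(68, 17))  # 68 16
```

With 17 participants this reduces the workload from 68 to 16 votes per participant, as stated in the text.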

Level of reduction
In Table 9.13 the reduction levels are given for the different artefacts. For the context and goal brainstorms, the reduction levels are calculated with respect to the number of accepted concepts in the first (and only) round of pre-selection voting. Reduction levels of 49% and 32% respectively were found. For the instruments brainstorm several reduction levels are calculated: one between the brainstorm artefact and the selection after the first round of pre-selection, and one between the brainstorm and the second round of pre-selection. We observed that the level of reduction after the second round (57%) is slightly lower than the ratio after the first round (64%). This indicates that some of the concepts that were categorized as 'unsure' after the first round ended up in 'accepted' after the second round. The last 8 rows in Table 9.13 show the reduction levels for the four different groups with respect to the original brainstorm artefact and the pre-selection made after the second round. It can be observed that groups 1 and 2 (which both used the NWV-ordered list of concepts) achieved higher reduction rates (68% & 72% and 27% & 36%) than groups 3 and 4 (which were given randomly ordered lists) (57% & 62% and 2% & 13%). With respect to the pre-selection made after the second round of voting, groups 3 and 4 hardly achieved any reduction.

Reduction                                        RC [%]   RCD [%]
Context (brainstorm - accepted)                  49%      42%
Goals (brainstorm - accepted)                    32%      31%
Goals (brainstorm - final selection)             86%      88%
Instruments (brainstorm - accepted round 2)      57%      56%
Instruments (brainstorm - accepted round 1)      64%      64%
Instruments (brainstorm - group 1)               68%      67%
Instruments (brainstorm - group 2)               72%      72%
Instruments (brainstorm - group 3)               57%      57%
Instruments (brainstorm - group 4)               62%      62%
Instruments (pre-selection round 2 - group 1)    27%      25%
Instruments (pre-selection round 2 - group 2)    36%      36%
Instruments (pre-selection round 2 - group 3)    2%       2%
Instruments (pre-selection round 2 - group 4)    13%      13%
Table 9.13, energy transition workshop, reduction ratios
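The reduction levels above can be read as the share of concepts removed between two artefacts. A minimal sketch of this assumed metric (the counts in the example are hypothetical, chosen only for illustration):

```python
def reduction_level(n_before, n_after):
    # Assumed definition: share of concepts removed between two artefacts,
    # e.g. between the brainstorm artefact and an accepted category.
    return 1 - n_after / n_before

# Hypothetical counts: 100 brainstormed concepts, 43 accepted afterwards
print(round(100 * reduction_level(100, 43)))  # 57
```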

Level of redundancy
Table 9.14 shows the calculated redundancy levels, based on the number of on-topic and off-topic raw comments (RC) compared to the total number of raw comments, and based on the number of unique and redundant disaggregated comments (RCD). For the context brainstorm, 96% of the concepts were on-topic and 28% of the brainstormed concepts were redundant. The accepted part of the brainstorm artefact, after the pre-selection, contained only on-topic concepts (100%) and consisted for 22% of redundant concepts. For the goal brainstorm similar values are found: 93% of the brainstormed concepts are on-topic and 24% are redundant. After the first and only round of pre-selection voting, 98% of the accepted concepts are on-topic and 20% are redundant. After the selection, based on a plenary discussion, all concepts are unique and on-topic. The instrument


brainstorm artefact contained 81% on-topic concepts, and 26% of the concepts were redundant. After the first round of voting, all concepts in the accepted category were on-topic and 38% of the concepts in this category were redundant. To further narrow down the selection, a second round of voting (on the 'unsure'-categorized concepts) was issued. After this round the accepted category still contained only on-topic concepts, but 40% was redundant. From this point the group was divided into four sub-groups. The four groups were given a list (digital, within the TeamSupport software) containing all concepts that were categorized 'accepted' after the second round of voting. The assignment was to further clean up the list and extract instruments to be used in the presentation. For groups 1&2 the list was ordered according to the calculated similarities; for groups 3&4 the list was ordered randomly. The artefacts created by the four groups all contained only on-topic concepts. Groups 1&2 removed much of the redundancy, a decrease from 40% to 2% and 3% respectively. This decrease was not observed for groups 3&4.

Redundancy                                RC-on [%]   RC-off [%]   RCD-Unique [%]   RCD-Redundant [%]
Context Brainstorm                        96%         4%           72%              28%
Context Pre-selection round 1/1  Total    96%         4%           72%              28%
  Accepted                                100%        0%           78%              22%
  Declined                                89%         11%          76%              24%
  Unsure                                  93%         7%           47%              53%
Goals Brainstorm                          93%         7%           76%              24%
Goals Pre-selection round 1/1  Total      93%         7%           78%              22%
  Accepted                                98%         3%           80%              20%
  Declined                                82%         18%          83%              17%
  Unsure                                  88%         13%          64%              36%
Goals selection                           100%        0%           100%             0%
Instruments Brainstorm                    81%         19%          74%              26%
Instruments Pre-selection round 1/2  Total 81%        19%          74%              26%
  Accepted                                100%        0%           63%              38%
  Declined                                61%         39%          85%              16%
  Unsure                                  93%         7%           74%              26%
Instruments Pre-selection round 2/2  Total 81%        19%          74%              26%
  Accepted                                100%        0%           60%              40%
  Declined                                67%         33%          85%              15%
Instruments selection Group 1             100%        0%           98%              2%
Instruments selection Group 2             100%        0%           97%              3%
Instruments selection Group 3             100%        0%           62%              38%
Instruments selection Group 4             100%        0%           63%              37%
Table 9.14, energy transition workshop, redundancy levels

Level of shared understanding
Shared understanding of the created artefacts is visualized in Table 9.15. The assessment of shared understanding is made by looking at the level of ambiguity among the concepts. The Divide&Conquer thinkLet, by definition, does not change the level of shared understanding or the level of ambiguity, since the formulation of the concepts is not changed and no or only limited room for discussion is present. This is reflected in the numbers as well. The context brainstorm contained 90%


unambiguously formulated concepts. A slightly higher value, 94%, is found in the accepted category after the pre-selection voting, indicating that the participants rated unambiguously formulated concepts higher than ambiguous ones. For the goal brainstorm similar results are observed: 85% unambiguous in the brainstorm artefact and 88% in the selected category after voting. After the selection of goals, based on a discussion, all goals were formulated in an unambiguous way. Regarding the instruments brainstorm, 67% of the concepts in the created artefact were formulated unambiguously. After two rounds of voting, 86% of the concepts in the accepted category were formulated unambiguously. Regarding the group work, the four created artefacts range from 90% to 100% unambiguously formulated concepts.

Shared understanding                       RC-U [%]   RC-A [%]   RCD-Unamb. [%]   RCD-Amb. [%]
Context Brainstorm                         90%        10%        92%              8%
Context Pre-selection, round 1/1  Total    90%        11%        92%              8%
  Accepted                                 94%        6%         96%              4%
  Declined                                 83%        19%        86%              14%
  Unsure                                   87%        14%        84%              16%
Goals Brainstorm                           85%        16%        88%              12%
Goals Pre-selection, round 1/1  Total      85%        16%        88%              12%
  Accepted                                 88%        13%        90%              10%
  Declined                                 82%        22%        83%              17%
  Unsure                                   75%        29%        82%              18%
Goals selection                            100%       0%         100%             0%
Instruments Brainstorm                     67%        41%        68%              32%
Instruments Pre-selection round 1/2  Total 67%        41%        68%              32%
  Accepted                                 89%        11%        90%              10%
  Declined                                 50%        82%        52%              48%
  Unsure                                   63%        40%        63%              37%
Instruments Pre-selection round 2/2  Total 67%        41%        68%              32%
  Accepted                                 86%        14%        86%              14%
  Declined                                 52%        71%        53%              47%
Instruments selection Group 1              90%        10%        91%              10%
Instruments selection Group 2              100%       0%         100%             0%
Instruments selection Group 3              91%        9%         93%              7%
Instruments selection Group 4              92%        8%         92%              8%
Table 9.15, energy transition workshop, level of shared understanding

Level of comprehensiveness
It is difficult to assess the level of comprehensiveness, because the completeness of the brainstorm artefact is difficult to determine. For the context brainstorm the teacher, in the role of domain expert, concluded that all main elements of the field of energy transition were present in the accepted category. The same holds for the goal and instrument brainstorms. Therefore we conclude that there are no indications that comprehensiveness suffered.

Level of refinement
Regarding the instruments brainstorm it can be concluded that the brainstormed instruments were of a very general nature. This is also stressed by the fact that all instruments mentioned in the four


presentations were elaborated versions of the instruments that were in the brainstorm artefact. Some participants complained about this, but this exactly matched the workshop design. The brainstorm was intended to give a background and starting point for the four (smaller) group discussions.

9.4.3 Limitations
Besides the limitations that come with using bachelor students for the workshop and facilitation, described in section 9.3.3, some distracting factors were also present during the workshop. Two of these are the presence of judging (unplanned) external observers and the technical failure that led to a delay and to distrust in the system. Further, the students admitted that they had not read the necessary documents in advance of the workshop; this is reflected in the general character of the results of the brainstorm tasks.

9.5 Shared Space workshop
The next sections will describe the execution of the workshop and will present the collected evaluation data.

9.5.1 Workshop execution
17 participants, the guest lecturer and the principal lecturer (behind the same laptop) were present. First, TeamSupport was used to brainstorm benefits of shared space and to perform the Divide&Conquer thinkLet for convergence on the benefits, in two rounds. Then a technical error occurred and it was no longer possible to use the TeamSupport software to finish the workshop. To replace TeamSupport, ThinkTank by GroupSystems was used to brainstorm concerns. To make a pre-selection of concerns, each participant cast 10 checkmarks. The process oriented results reflect the entire workshop, including the interruption caused by the technical malfunction and the switch to another GSS package, which took 20 minutes. The result oriented evaluation only reflects the results achieved in TeamSupport.

9.5.2 Workshop evaluation
The workshop is evaluated using a questionnaire to measure the process oriented aspects and by coding the output data to measure the result oriented aspects.

Process oriented evaluation
This part of the questionnaire also had several open questions for comments on the process that was followed in the workshop and on the quality of facilitation.

Table 9.16 shows the average values for the six metrics mentioned below. In the table the following abbreviations are used:

SP = satisfaction process
SR = satisfaction result
CP = commitment process
PR = productivity
EF = efficiency
CR = commitment result

                 SP     SR     CP     PR     EF     CR
N Valid          17     17     17     17     17     17
  Missing        0      0      0      0      0      0
Mean             5.5    5.2    4.9    5.6    5.5    5.5
Std. Deviation   0.9    0.8    0.8    0.6    0.6    0.6
Minimum          2.6    3.6    3.2    4.4    4.0    4.4
Maximum          6.6    6.0    6.2    6.2    7.0    6.2
Table 9.16, shared space workshop, process evaluation

We conclude that the participants were satisfied with the process and the result, and also committed to the achieved results. Their perceived productivity and efficiency are also positive. The comments on process and facilitation, presented in Appendix S, confirm this. Based on the information in the table it can be concluded that the facilitation interventions did not influence the workshop in a negative way. Regarding the process, we conclude from the remarks that most of the participants regretted the system failure and liked the first part of the workshop (using TeamSupport) better than the second part (using ThinkTank).

Regarding the result, the participants are satisfied as well, as can be concluded from their comments (Appendix S table 6). Most participants think the result is good and meets expectations; a few wonder how to use the results. The general comments made (Appendix S table 7) support the conclusion of the energy transition workshop, namely that the presence of redundant items in the voting list is undesirable and distracting.

Result oriented evaluation for the shared space workshop

Speed
The brainstorm for benefits of implementing the shared space concept took 10 minutes. The two subsequent rounds of voting took 4 minutes in total. The final discussion, in which the key benefits were selected, was completed in 30 minutes.

Execution time in minutes   Time [min]   Speed ratio
brainstorm benefits         10
1st round voting            2            0.2
2nd round voting            2            0.2
final selection             30           3.2
Table 9.17, shared space workshop, speed

For this workshop the gain in time of using the Divide&Conquer thinkLet can be expressed by comparing the number of votes each participant has to cast with the number required by a traditional voting method. With a traditional voting method, regardless of the scale used, every participant has to cast a vote for every concept; in this case, that would add up to 46 votes per participant. By using the Divide&Conquer thinkLet, the number of votes per participant is reduced to 11 in the first round and 13 in the second round.

Level of reduction
The total reduction level achieved, from brainstorm to final selection, was 89%. The Divide&Conquer thinkLet achieved a 57% level of reduction after the second round. After the first round the reduction level was higher, 63%, indicating that the second round moved some 'unsure' concepts to 'accepted'; this stresses the importance of executing the second round.


Reduction                                               RC [%]   RCD [%]
total pre-selection (brainstorm - 2nd round accepted)   57%      49%
first round (brainstorm - 1st round accepted)           63%      60%
final selection                                         89%      91%
Table 9.18, shared space workshop, level of reduction

Level of redundancy
Initially 85% of the concepts in the brainstorm were on-topic and only 7% were redundant. The accepted category contained 100% on-topic concepts in both rounds of voting. The redundancy level in the accepted category after the second round of voting was 4%. After the discussion the selection of benefits was 100% unique and on-topic. Again, these numbers display the ability of the Divide&Conquer thinkLet to quickly filter concepts.

Redundancy                 RC-on [%]   RC-off [%]   RCD-Unique [%]   RCD-Redundant [%]
brainstorm benefits        85%         15%          93%              7%
1st round voting  totals   85%         15%          93%              7%
  accepted                 100%        0%           96%              5%
  declined                 63%         38%          88%              13%
  unsure                   92%         8%           94%              6%
2nd round voting  totals   85%         15%          93%              7%
  accepted                 100%        0%           96%              4%
  declined                 73%         27%          89%              12%
final selection            100%        0%           100%             0%
Table 9.19, shared space workshop, level of redundancy

Level of shared understanding
Judging from the number of unambiguously formulated concepts, the level of shared understanding for the brainstorm artefact was 83%; the level increased to 100% after the discussion. After the second round of voting, as after the first, the accepted category contained only unambiguously formulated concepts.

Shared understanding       RC-U [%]   RC-A [%]   RCD-Unamb. [%]   RCD-Amb. [%]
brainstorm benefits        83%        17%        86%              15%
1st round voting  totals   83%        17%        86%              15%
  accepted                 100%       0%         100%             0%
  declined                 56%        44%        56%              44%
  unsure                   92%        8%         94%              6%
2nd round voting  totals   83%        17%        85%              15%
  accepted                 100%       0%         100%             0%
  declined                 69%        31%        69%              31%
final selection            100%       0%         100%             0%
Table 9.20, shared space workshop, level of shared understanding


Level of comprehensiveness
There was no time or opportunity to discuss the results in depth with the guest lecturer; however, she indicated that all main benefits, as identified in relevant literature, were present in the accepted category after the second round of voting (Marjan Haagenzieker, personal communication, March 9, 2010).

Level of refinement
Almost all benefits have the same level of abstraction and detail. This corresponds to the instruction given at the beginning of the workshop.

9.5.3 Limitations
For this evaluation the same limitations apply as those described for the green-ICT workshop (see section 9.3.3 on page 82), since the workshop was executed in the same environment. A further limitation is a possible lack of commitment or sense of urgency on the part of one or two of the participants, as indicated by their comments. This is understandable since, seen from a student perspective, this workshop was not a prerequisite for passing the exam. On the other hand, the high score for commitment shows that most of the participants were committed to the workshop and its result.

9.6 Conclusion and discussion of the results of the three evaluation workshops

This section discusses the results and conclusions from the three evaluation workshops. The first subsection discusses the NWV-based similarity detection technique, the second the Divide&Conquer thinkLet and the last the FocusBuilder thinkLet. All three subsections also describe directions for future research.

9.6.1 Similarity detection using NWV
Due to technical hurdles and time constraints, it was not possible to evaluate the use of similarity detection within the FocusBuilder or any other thinkLet. Directions for evaluating NWV within these thinkLets in the future are described in chapter 11. The use of NWV did reveal positive results: the two groups that were given a list of instruments ordered according to the calculated similarities achieved a higher rate of reduction, a lower redundancy level and a higher level of shared understanding than the two groups that were given a randomly ordered list, in the same amount of time. The questionnaire results confirm this, and confirm that the students appreciated the support from this technique.

One of the technical hurdles was that it was not possible to extract strings of synonyms longer than two words from the synoniemen.net database. It was only possible to extract the main synonym (root word) to which the entered word belonged; if the entered word was itself a main synonym, the API returned the same word. This is a design choice of the synoniemen.net database. Reducing the number of synonyms in the string reduces the range of words (and meanings) for which a point (a '1') will be given in the vector, a point that could result in an angle that is considered to indicate similarity. From the fact that some of the equal relations were not detected, it can be concluded that too few synonyms were included in the list. This has greatly reduced the potential to detect similarities. When another database with more synonyms per word is used, the accuracy of detection is expected to increase. The number of false positives is also likely to increase; therefore the threshold values should be recalibrated. This database dependency is clearly a weak point of this

Supporting convergence in groups

Evaluation of the Divide&Conquer thinkLet, modified FocusBuilder thinkLet and Normalized Word Vectors

93

technique. For instance in workshops in technical domains, it is very likely that not all the terms (or company slang) used will exist in the database, and will therefore not be included in the analyses.
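To illustrate why the number of synonyms matters, the following sketch shows how a concept's words could be expanded with thesaurus synonyms before the binary vector is assembled. The thesaurus, function names and toy data are illustrative assumptions, not the synoniemen.net API:

```python
# Sketch (not the thesis implementation): expanding each word of a concept
# with thesaurus synonyms before building a binary word vector.

def lookup_synonyms(word):
    # Hypothetical thesaurus lookup; synoniemen.net only returned the
    # root synonym, which is the limitation discussed above.
    toy_thesaurus = {
        "car": ["automobile", "vehicle"],
        "automobile": ["car", "vehicle"],
    }
    return toy_thesaurus.get(word, [])

def binary_word_vector(concept, vocabulary):
    """Mark a 1 for every vocabulary term that matches a concept word
    or one of its synonyms; more synonyms widen the match range."""
    words = set(concept.lower().split())
    expanded = set(words)
    for w in words:
        expanded.update(lookup_synonyms(w))
    return [1 if term in expanded else 0 for term in vocabulary]

vocab = ["car", "automobile", "vehicle", "bike"]
print(binary_word_vector("buy a car", vocab))  # [1, 1, 1, 0]
```

With a richer thesaurus the expanded set grows, more vector positions receive a '1', and two concepts phrased with different words are more likely to end up with a small angle between their vectors.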

9.6.2 Divide&Conquer thinkLet

The Divide&Conquer thinkLet has been used several times in two of the workshops to make pre-selections of brainstormed concepts. The goals for making these pre-selections were to quickly extract aspects of the playing field of energy transition, to formulate goals for energy transition, to make a pre-selection of brainstormed instruments and to pre-select benefits of a concept after a brainstorm. The two groups that used the thinkLet consisted of 16 and 17 participants. The average time needed to complete the process was 5.8 minutes for the first round and 7.0 minutes for the first and second round of the thinkLet. The corresponding speed ratios range from 0.4 to 1.3. The average reduction level found in the two workshops is 50% with a standard deviation of 10%; the reduction level in the first round is higher than in the second round. In both workshops the thinkLet achieved a 100% on-topic concepts score for the concepts in the accepted category.

In the first workshop participant satisfaction with the process and result was positive, but not very high. The reason for this, deduced from the comments made by the participants, is the acceptance of the procedure. Some participants found it difficult to accept that they could not vote on all concepts and expressed concerns about the way in which rejected concepts were treated. To overcome this, three guidelines are formulated:

• Explain and reach agreement on the procedure in advance

• Always use two rounds of voting (or combine them)

• Give participants the opportunity to browse through the declined concepts and room to move concepts from declined to accepted

In the second workshop these guidelines were followed and the problem described above did not occur. Despite the problems with the acceptance of the procedure, the participants of the first workshop scored both commitment metrics positively, as well as productivity and efficiency. This underlines the need for a good explanation of the procedure at the start. For the second workshop the values for commitment to the process and result and for productivity and efficiency were slightly higher, but no statistically significant difference was found.

Gain in time

Regarding the gain in time, two observations are made. Since participants have to express fewer votes than when using a traditional voting method, either time is saved or participants take more time to express their votes. Judging by the time spent on voting in the workshop, a combination of the two occurred. Evidence for this was also found during the architecture evaluation workshop: participants indicated that they appreciated having more time to express a meaningful vote. One can argue that taking more time to express a vote increases the quality and comprehensiveness of the selection, but within this thesis no evidence was found to support this.

Application areas

As can be concluded from the number of on-topic concepts and the level of reduction, the thinkLet can be used to quickly filter a set of concepts. Very likely, after two (or one) rounds of voting the accepted category will contain only on-topic concepts, with a tendency towards unambiguously formulated concepts. The thinkLet can be used to make a pre-selection of a set of brainstormed concepts. It is expected that the thinkLet will perform well in medium as well as large groups. The thinkLet reduces the risk of dual-task interference, because the participants are given the opportunity to limit their attention to the selection task only (Heninger et al., 2006). The reduction in the number of votes to be expressed per participant reduces cognitive load (Sweller et al., 1998). Therefore the thinkLet can be used to provide structure to the convergence task by isolating the selection task, in the same way as is proposed in the PD-GSS approach (Helquist et al., 2007). The need for extra structural support is present when the number of concepts under consideration is large, the number of participants is large or the facilitator is inexperienced (Faieta et al., 2006).

Redundant concepts

In every voting process redundant concepts constitute a problem. Normally participants are advised to rate redundant concepts equally (Briggs & de Vreede, 2009), and for the Divide&Conquer thinkLet the same advice is valid. However, because a selection is made per participant, it can occur that one participant is presented with mainly redundant concepts on his/her screen. To overcome this, the use of similarity detection before voting starts is added to the list for future research.

Voting scale

The last workshop showed the benefit of using a more detailed scale for making the pre-selection. After the selection, the acquired voting data could also directly be used to order the concepts according to importance, which served as welcome input for the discussion. On the other hand, a less detailed three-point scale could increase the speed of the method.

9.6.3 FocusBuilder thinkLet

Based on the process-oriented metrics it is concluded that the participants accepted the way of working and enjoyed working in this way. Technical aspects of the workshop execution can influence the participants' perception of the workshop on all levels. Further, commitment to the group goal and the process is a prerequisite for a perception of success by the participants. Based on the architecture of the thinkLet it is concluded that this thinkLet is scalable, since extra sub groups can be formed (or group size can be increased) without increasing the number of rounds needed to arrive at one final list. Even with 16 sub groups of participants, convergence to one final list can be achieved in 4 rounds. What does increase is the number of participants that need to collaborate in the second and third round, which could lead to longer execution times. Ideas to overcome this are to use representatives for sub groups, to ask participants to start at different ends of the lists, and to provide a scalable support platform as described in Appendix U. The speed ratio found in this workshop is similar to the value found in the case studies by Davis et al. (2008). The group used in this case study is three times larger than the groups used in the case study described by Davis et al. (2008), which supports the hypothesis regarding the scalability of this thinkLet.
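The scalability claim above can be illustrated with a small calculation. Assuming sub-group lists are merged pairwise each round (an assumption on my part; the thinkLet's actual merge schedule may differ), the number of rounds grows only logarithmically with the number of sub groups:

```python
import math

def rounds_to_one_list(subgroups):
    """Pairwise merging halves the number of lists each round,
    so 16 sub groups need ceil(log2(16)) = 4 merge rounds."""
    return math.ceil(math.log2(subgroups)) if subgroups > 1 else 0

print(rounds_to_one_list(16))  # 4
```

Under this assumption, doubling the number of sub groups adds only one round, which is consistent with the observation that even 16 sub groups converge to one final list in 4 rounds.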

Based on the result-oriented metrics it is concluded that the changed FocusBuilder approach leads to a reduction of the number of concepts under consideration, is able to remove redundancy (off-topic items and redundant entries) and leads to an increased shared understanding of the concepts under consideration. These conclusions are similar to what Davis et al. (2008) have found. A difference in this case study is the improvement in comprehensiveness of the end result. We found that the risk of removal of critical concepts by sub groups is present, but very small. Davis et al. (2008) report comprehensiveness ratios in the range of 10% to 60% for 8 workshops in which the FocusBuilder thinkLet was used in its original format. In the Green-ICT workshop a comprehensiveness ratio of 94% was achieved, which illustrates the improvement made by removing the first round of the thinkLet. In this first (removed) step, the participants worked individually on the brainstorm artefact; it is believed that removing it reduces this risk.

Besides giving instructions at the start of the FocusBuilder and keeping to the time schedule, the dependence of this thinkLet on the facilitator is relatively low. A GSS can be designed to assist the facilitator in these two tasks. Guiding the discussion and presentations in the last round of the thinkLet requires a little more effort from the facilitator, but it is possible to let the participants identify and summarize redundant concepts by asking for proposals and validating these with the other participants.

Application areas

To date no constraints have been found regarding the type of workshops or groups in which the thinkLet can be used. Suggestions include creative problem solving, public participation, and policy setting or evaluation workshops with medium to large teams of professionals. The details and instructions of the thinkLet described in Appendix P can be used to transfer the design to other facilitators and practitioners. The thinkLet matches the hypothesis on convergence processes posed by Media Synchronicity Theory (MST) (Dennis et al., 2008): the opportunities for feedback are high, because participants work in small teams, and the level of parallelism is low, because each change in a subset of the artefact is followed by a discussion with other participants. Of course the level of parallelism is higher than in thinkLets that use a completely plenary way of working; this slight increase in parallelism makes the FocusBuilder thinkLet scalable. According to Kincaid's model of convergence (1994) the number of rounds of communication is reduced to the bare minimum to limit the influence of noise. The risk of noise is further limited by giving the participants a clear task description and a suitable environment in which to execute their task. Because the participants are presented with a subset of the concepts in the first round, a reduction in cognitive load is achieved compared to a situation where a participant is asked to consider an entire brainstorm artefact at once.

10 Conclusions

The main focus of this thesis is on how to improve the successfulness and effectiveness of convergence processes. A design science approach was used to arrive at an answer to this question. The question is relevant because we have learned from our own experience in facilitating group work that converging can be a difficult and time consuming activity. A large body of literature also describes hurdles with respect to the convergence pattern of collaboration. The hurdles described in the literature can be summarized as (1) information overload at the beginning of the convergence task, (2) the high cognitive effort required to execute a convergence task and (3) the need for a higher granularity of meeting ideas to be stored for future decision making and analysis. These hurdles contribute to the fact that facilitators find convergence time consuming and difficult to facilitate. Effective and successful convergence is important because it enables a group to focus on the concepts that are important to reach their goal. It reduces the set of concepts under consideration in such a way that the group is presented with the concepts that they deem worthy of further attention, and thereby reduces cognitive load. Especially after a brainstorming activity, effective and successful convergence is needed to reduce the set of concepts under consideration: in a brainstorming activity participants will typically create too many concepts, and a brainstorm artefact is further characterized by redundancy, similarity, ambiguity and a lack of shared understanding.

To assess the performance of a method for convergence, objective and relevant metrics are needed. A literature survey and interviews with professional facilitators revealed a list of process-oriented and result-oriented dimensions that can be used to assess the success and effectiveness of a convergence pattern of collaboration and the convergence process it creates. The process-oriented dimensions are, in random order: acceptance, satisfaction, facilitator dependence, scalability, commitment, productivity and efficiency. The result-oriented dimensions are, in random order: speed, redundancy, reduction, refinement, comprehensiveness, shared understanding, satisfaction and commitment. These dimensions of success and effectiveness can be measured by coding the artefacts that are used in and created during the convergence activity and by questioning the participants afterwards. By using an input – process – output model and a corresponding UML class diagram of its attributes, we have visualized to which artefact or process each metric applies. This model can also be used for the selection of a method suited for the convergence task at hand.

To pinpoint the nature and source of the experienced and described hurdles regarding convergence, we have made an overview and analysis of the methods for convergence that are currently available. A large number of methods was found from various sources: we searched the thinkLet database, the IAF method database and the relevant literature. The methods found could be classified by the output a method creates and the way of working it implies. The output of a method for convergence is either a list of concepts or one statement representing the most important concepts; in case of a list, a distinction can be made as to whether the list contains redundant items or not. The way of working is either completely plenary, a combination of parallel and plenary activities, or dependent on a third party for the convergence task. The way of working of a method determines to a large extent the speed, scalability and facilitator dependence of the method. For effective and successful convergence, a match should exist between the goal and task description and the output a method creates. When comparing the methods a large degree of similarity was found. Elements shared by methods are (1) voting for making a (pre-)selection of concepts, (2) plenary discussion or presentations to create shared understanding, (3) the use of third-party experts to execute the convergence task and (4) the use of parallelism to increase speed and scalability. Within the thinkLet database and IAF method database, overlap was also found in the (process) elements of which the thinkLets are composed: for almost every thinkLet a similar IAF method exists, and vice versa. The literature review further revealed case studies in which the convergence process depends entirely on the skills and experience of the facilitator, the use of tagging to enable improved control and measurement of the process, a variety of graphical methods that the facilitator can use to improve the structure of information, the idea of giving the facilitator a search function, and an approach called participant-driven GSS. This approach uses five steps that can be initiated by the participants; the process is time and location independent.

We have identified a number of factors that contribute to or influence the success and effectiveness of a convergence task. We have noted that plenary discussions or presentations foster the creation of shared understanding. Further, a plenary way of working depends more on the facilitator than a parallel way of working, and parallel ways of working are more scalable and take less time than plenary ones. Two ways of parallel working are identified: one where all participants perform the same task in parallel on the same concepts, and one where participants work in parallel on different concepts and combine their work afterwards. The latter is the most scalable.

Using the example of a creative problem solving workshop, in which we tried to find a match between a method for convergence and a convergence task for different scenarios, we have identified opportunities for improvement. The scenarios differed in the number of participants and the facilitator skill level. The opportunities for improvement can be summarized as:

1. Removing the task of detecting redundant concepts from the facilitator to lower his/her workload.

2. Resolving the current hurdles that exist when converging in a parallel way, as is done in the FocusBuilder thinkLet among others. The current hurdles of this thinkLet include:

a. Lack of comprehensiveness of the end result
b. Inability for the facilitator to monitor the process

3. Creating a scalable and fast pre-selection method.

4. Improving support for inexperienced facilitators to manage a convergence process in a large group.

In response to these opportunities we have designed three artefacts. First, a new thinkLet, Divide&Conquer, was developed that enables large groups to quickly make a pre-selection of concepts. Second, we designed a modifier to the FocusBuilder thinkLet; this thinkLet supports the creation of shared understanding, achieves a reduction of the concepts under consideration and removes redundant concepts from a brainstorm artefact in a scalable and fast way. Third, a technique for similarity detection, normally used for plagiarism detection and automatic grading of written texts, was adapted and evaluated for detecting redundant concepts in a brainstorm or convergence artefact. The technique uses normalized vector representations of concepts, based on a thesaurus, to detect similar concepts.

Evaluation in groups of the technique for similarity detection, the new Divide&Conquer thinkLet and the modified FocusBuilder thinkLet revealed that:

• Even with a moderate detection rate of 50%, participants who used an artefact in which concepts were ordered according to the automatically detected redundancies were able to remove redundant concepts faster than participants who did not. Evaluation however is limited to one case study; further evaluation is needed to validate the results and to research the use of similarity detection within the new and other thinkLets.

• The Divide&Conquer thinkLet can be used within groups to quickly make a pre-selection of concepts that the group deems worthy of further attention. The process and results of the thinkLet were accepted by the participants of two workshops; however, the process needs thorough explanation before the start to reach agreement on it. The thinkLet achieves a pre-selection quicker than other pre-selection methods because, in principle, fewer votes than the number of participants are collected per concept. Based on the average value and standard deviation it is decided whether more votes per concept are needed. This increases the speed and therefore the scalability of the pre-selection process. The pre-selections made in the evaluation workshops with this thinkLet contained only on-topic items and reduced the original brainstorm artefact by 50% on average, with a standard deviation of 10%. Besides explaining the process and presenting the results, no facilitator efforts are required.

• The modified FocusBuilder thinkLet can be used on a brainstorm artefact directly or after a pre-selection has been made. The thinkLet fosters the creation of shared understanding and achieves a (further) reduction in the number of concepts under consideration by removing and summarizing redundant concepts and removing off-topic concepts. The thinkLet uses sub groups of participants that work on sub sets of concepts in parallel, and convergence is achieved in three or four rounds. In previous case studies the comprehensiveness of the end result was too low. We removed the first round from the thinkLet, in which the participants work alone, to limit participant bias. Evaluation revealed that the comprehensiveness of the end result increased, without changing any other values that were already positive. Because of the parallel way of working the thinkLet is fast and scalable. Facilitator interventions are only needed to explain the process and to present the end result; the real convergence effort is executed by the participants, so facilitator dependence of this thinkLet is low. The inability of the facilitator to monitor the process remains an opportunity for improvement of this thinkLet; a design for this is described, but has not been evaluated.

The capabilities for a GSS to support these thinkLets are described and illustrated with a series of sketches, finally resulting in a Scrum backlog. For the FocusBuilder thinkLet a mechanism is needed that divides the concepts into sub sets and supports multiple sub groups working on these sub sets at the same time. To monitor the process, the facilitator needs access to the various participant screens. For the Divide&Conquer thinkLet a mechanism is needed that divides the concepts among the participants in such a way that only four votes per concept are collected. Based on the average voting score and standard deviation, the GSS categorizes the concepts into three categories (accepted, declined and unsure). When desired by the facilitator, the GSS should be able to collect more votes for the concepts in the unsure category. For the evaluation work in this thesis a version of the TeamSupport GSS was built that fully supports the Divide&Conquer thinkLet.
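The accepted/declined/unsure split described above can be sketched as follows. This is an illustrative reconstruction, not the TeamSupport implementation; the function name and the threshold values (assuming a five-point voting scale) are my own assumptions:

```python
from statistics import mean, stdev

def categorize(votes, accept_at=3.5, decline_at=2.5, max_spread=1.0):
    """Categorize a concept from a handful of votes using the average
    score and standard deviation; thresholds are illustrative."""
    avg = mean(votes)
    spread = stdev(votes) if len(votes) > 1 else 0.0
    if spread > max_spread:
        return "unsure"       # disagreement: collect more votes
    if avg >= accept_at:
        return "accepted"
    if avg <= decline_at:
        return "declined"
    return "unsure"

print(categorize([4, 5, 4, 4]))  # accepted
print(categorize([1, 2, 1, 2]))  # declined
print(categorize([1, 5, 2, 5]))  # unsure
```

The point of the standard-deviation check is that a concept with widely spread votes should not be accepted or declined on four votes alone, which is exactly the case in which the facilitator can ask the GSS to collect more votes.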

To make use of similarity detection based on Normalized Word Vectors (NWV), the GSS needs (access to) a detailed thesaurus and should be capable of comparing each concept with the thesaurus to assemble a vector per concept. To detect similarities, the angles between all created vectors have to be calculated; based on these angles the GSS should be able to indicate similar concepts, for instance by highlighting or grouping them. To evaluate the use of similarity detection within the Divide&Conquer thinkLet, the GSS should be able to order a list of concepts based on the calculated similarities. To evaluate the use of similarity detection within the FocusBuilder thinkLet, the GSS should be able to create sub sets of concepts that contain only similar concepts, or sub sets that contain no similar concepts. Also, the level of similarity should be calculated at subset level to support the choice of which sub sets are to be combined in each round.
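The core computation, comparing concept vectors by the angle between them, can be sketched as follows; the function name and example vectors are illustrative, and the threshold below which an angle counts as "similar" would still need the calibration discussed in chapter 9:

```python
import math

def angle_degrees(u, v):
    """Angle between two non-zero concept vectors; a small angle
    (below a calibrated threshold) indicates similar concepts."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    # Clamp for floating-point safety before taking the arccosine.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

u = [1, 1, 0, 0]
v = [1, 0, 1, 0]
print(round(angle_degrees(u, u)))  # 0
print(round(angle_degrees(u, v)))  # 60
```

Ordering a concept list by ascending angle to each concept's nearest neighbour would then place likely duplicates next to each other, which is the ordering the evaluated groups received.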

11 Limitations and future research

Besides the limitations already mentioned in sections 9.3.3, 9.4.3 and 9.5.3 regarding the participants and facilitation of the evaluation workshops, a comment needs to be made regarding the coding of the workshop data. For the coding of the data from the workshops, which took about 45 hours per workshop, limited resources for validation of the coding effort were available. The author of this thesis served as the main coder and his work was checked by a second coder. Therefore it cannot be guaranteed that all errors in the coding have been removed. This, and the overall low number of evaluation workshops, calls for future research to validate the performance of the three created artefacts.

The input – process – output model presented offers opportunities to develop a selection tool for a suitable method for convergence. Novice facilitators or practitioners can benefit from support for selecting the correct method for their convergence task. The UML class diagram presented in Figure 5.2 on page 33 can serve as a basis for such a selection tool. First, all methods in the ConvergenceMethodDatabase class (these are all methods from the ConvergenceMethod class) should be characterized by the attributes of the ConvergenceMethod class. When the attributes of the TaskDescription and Context classes are known, the method that gives the best match between the attributes can be selected.
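A minimal sketch of this matching idea, reusing the ConvergenceMethod class name from the UML diagram but with attributes and a scoring rule of my own choosing (the diagram in Figure 5.2 defines more attributes than shown here):

```python
from dataclasses import dataclass

@dataclass
class ConvergenceMethod:
    # Illustrative attributes; the classification in this thesis uses
    # at least the output type and the way of working of a method.
    name: str
    output: str          # e.g. "non-redundant list", "single statement"
    way_of_working: str  # "plenary", "parallel", "third party"

def select_method(methods, required_output, preferred_way):
    """Return the method whose attributes best match the task description."""
    def score(m):
        return (m.output == required_output) + (m.way_of_working == preferred_way)
    return max(methods, key=score)

db = [ConvergenceMethod("FocusBuilder", "non-redundant list", "parallel"),
      ConvergenceMethod("Plenary summary", "single statement", "plenary")]
print(select_method(db, "non-redundant list", "parallel").name)  # FocusBuilder
```

A real selection tool would score all attributes of the TaskDescription and Context classes, possibly with weights, rather than the two attributes used here.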

The thinkLet database proved to contain similar or nearly similar methods. Possibly methods can be removed or combined by using modifiers as described by Kolfschoten and Santanen (2007). This would add to the clarity and usability of the thinkLet database.

Regarding the automatic detection of similar concepts based on normalized word vectors, the following opportunities for further research are identified. Unfortunately, due to technical hurdles and time constraints, it was not possible to further evaluate the use of similarity detection within the FocusBuilder or Divide&Conquer thinkLet within this project. Besides improving the accuracy of the technique itself (see section 7.1.2), future research can concentrate on:

1. The use of the NWV technique within the FocusBuilder thinkLet to:

a. Order concepts in randomly created sub sets according to similarity; this should lower the participants' cognitive load and lead to an increase in speed, shared understanding, comprehensiveness and reduction.

b. Create sub sets of concepts that are orthogonal in meaning; this should lead to an increase in speed, but might also lower shared understanding.

c. Create sub sets that overlap in meaning; this should lead to a decrease in speed, because it fosters the need for discussion between sub groups and thereby maximizes shared understanding.

d. Determine which sub sets are to be combined.

2. The use of the NWV technique within the Divide&Conquer thinkLet to identify redundant concepts before the voting process starts. Before voting starts, the facilitator can quickly combine redundant concepts, for instance while the participants are on a coffee break.

3. The use of similarity detection in real time during the entire workshop to assess the progress and quality of the workshop. Using similarity detection, the facilitator can get an overview of several aspects of the generating activity, like its scope or focus. This information could be used to give the facilitator hints for interventions.
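Determining which sub sets to combine (point 1d above) requires a similarity score at subset level. One straightforward definition, an assumption of mine rather than a choice made in this thesis, is the average cosine similarity over all cross-subset vector pairs:

```python
import math

def avg_cross_similarity(set_a, set_b):
    """Average cosine similarity between all vector pairs drawn from two
    subsets of concept vectors; a higher score suggests the subsets
    overlap in meaning and are candidates for combination."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    pairs = [(u, v) for u in set_a for v in set_b]
    return sum(cos(u, v) for u, v in pairs) / len(pairs)

a = [[1, 0], [1, 1]]
b = [[0, 1]]
print(round(avg_cross_similarity(a, b), 2))  # 0.35
```

In each round the GSS could then either combine the pair of sub sets with the highest score (to merge overlapping meaning early) or the lowest (to keep sub sets orthogonal), matching options (b) and (c) above.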

For the FocusBuilder thinkLet, further research is needed to validate the result-oriented and process-oriented values found in this single workshop, preferably in large groups to validate the claim about the scalability of the thinkLet. After this, research is needed to validate the assumptions about the use of similarity detection within the thinkLet for the creation and combination of sub sets of concepts. It is suggested to execute further evaluation in real-life situations; the case study described here, combined with the work of Davis et al. (2008), provides a good indication that the thinkLet leads to a satisfying result in an effective and successful way. Besides using similarity detection for creating sub sets of concepts, it can also be used to suggest sub groups of participants. Combining participants that contributed similar concepts, or combining participants that did not, can have an effect on the speed, comprehensiveness and ambiguity of the end result.

Regarding the Divide&Conquer thinkLet, the experiences so far give enough confidence to use the thinkLet in organizational settings. Further case studies can focus on the fields of applicability of the thinkLet. Besides creative problem solving workshops or policy setting and evaluation workshops, one can also think of large public participation processes or large online discussions in which to use this thinkLet. Another opportunity for further research is the detection and removal of redundant concepts before the voting process starts.

References

Adkins, M., Burgoon, M., & Nunamaker Jr., J. F. (2003). Using group support systems for strategic planning with the United States Air Force. Decision Support Systems, 34(3), 315-337.

Adkins, M., Kruse, J., & Younger, R. (2004). A language technology toolset for development of a large group augmented facilitation system. Paper presented at the Proceedings of the Hawaii International Conference on System Sciences, Big Island, HI.

Adkins, M., Younger, R., & Schwarz, R. (2003). Information technology augmentation of the skilled facilitator approach. Paper presented at the Proceedings of the 36th Hawaii International Conference on System Sciences.

Agres, A. B., de Vreede, G.-J., & Briggs, R. O. (2005). A Tale of two Cities: Case Studies of Group Support Systems Transition. Group Decision and Negotiation, 14, 167-284.

Appelman, J. H., & van Driel, J. (2005). Crisis-Response in the Port of Rotterdam: Can We do Without a Facilitator in Distributed Settings? Paper presented at the Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05).

Ayres, P., & van Gog, T. (2009). State of the art research into Cognitive Load Theory. Computers in Human Behavior, 25(2), 253-257.

Badura, V., Read, A., Briggs, R. O., & de Vreede, G.-J. (2009). Exploring the Effects of a Convergence Intervention on the Artifacts of an Ideation Activity during Sensemaking. In Groupware: Design, Implementation, and Use (pp. 79-93).

Badura, V., Read, A., Briggs, R. O., & de Vreede, G.-J. (2010). Coding for Unique Ideas and Ambiguity: Measuring the Effects of a Convergence Intervention on the Artifact of an Ideation Activity. Paper presented at the Hawaii International Conference on System Sciences, Hawaii.

Baltes, B. B., Dickson, M. W., Sherman, M. P., Bauer, C. C., & LaGanke, J. S. (2002). Computer-Mediated Communication and Group Decision Making: A Meta-Analysis. Organizational Behavior and Human Decision Processes, 87, 156-179.

Bannert, M. (2002). Managing cognitive load - Recent trends in Cognitive Load Theory. Learning and Instruction, 12(1), 139-146.

Bragge, J., Merisalo-Rantanen, H., & Hallikainen, P. (2005). Gathering innovative end-user feedback for continuous development of information systems: A repeatable and transferable e-collaboration process. IEEE Transactions on Professional Communication, 48(1), 55-67.

Bragge, J., Merisalo-Rantanen, H., Nurmi, A., & Tanner, L. (2007). A Repeatable E-Collaboration Process Based on ThinkLets for Multi-Organization Strategy Development. Group Decision and Negotiation, 16(4), 363-379.

Bragge, J., Tuunanen, T., den Hengst, M., & Virtanen, V. (2005). A repeatable collaboration process for developing a road map for emerging new technology business: case mobile marketing. Paper presented at the Proceedings of the Eleventh Americas Conference on Information Systems, Omaha, NE, USA.

Briggs, R. O., & de Vreede, G.-J. (2009). ThinkLets: building blocks for concerted collaboration. University of Nebraska at Omaha: Center for Collaboration Science.

Briggs, R. O., de Vreede, G.-J., & Kolfschoten, G. L. (2008). ThinkLets for e-collaboration. In N. F. Kock (Ed.), Encyclopedia of E-Collaboration. Hershey, PA, USA: Information Science Reference.

Briggs, R. O., de Vreede, G.-J., Nunamaker Jr., J. F., & Tobey, D. (2001). ThinkLets: Achieving Predictable, Repeatable Patterns of Group Interaction with Group Support Systems (GSS). Paper presented at the Hawaii International Conference on System Sciences.

Briggs, R. O., de Vreede, G.-J., & Nunamaker Jr., J. F. (2003). Collaboration Engineering with ThinkLets to Pursue Sustained Success with Group Support Systems. Journal of Management Information Systems, 19(4), 31-64.

Briggs, R. O., Kolfschoten, G. L., de Vreede, G.-J., & Douglas, D. (2006). Defining key concepts for collaboration engineering. Paper presented at the Americas Conference on Information Systems, Acapulco, Mexico.

Briggs, R. O., Nunamaker Jr., J. F., & Sprague Jr., R. H. (1997). 1001 Unanswered research questions in GSS. Journal of Management Information Systems, 14(3), 3-21.

Supporting convergence in groups

References


Briggs, R. O., Reinig, B. A., & De Vreede, G. J. (2006). Meeting satisfaction for technology-supported groups: An empirical validation of a goal-attainment model. Small Group Research, 37(6), 585-611.

Briggs, R. O., Reinig, B. A., Shepherd, M. M., Yen, J., & Nunamaker Jr., J. F. (1997). Quality as a function of quantity in electronic brainstorming. Proceedings of the Thirtieth Hawaii International Conference on Systems Sciences.

Carlson, J. R., & George, J. F. (2004). Media appropriateness in the conduct and discovery of deceptive communication: The relative influence of richness and synchronicity. Group Decision and Negotiation, 13(2), 191-210.

Carlsson, S. A. (2006). Towards an information systems design research framework: a critical realist perspective. Paper presented at the DESRIST, Claremont, CA.

Catledge, L. D., & Potts, C. (1996). Collaboration during conceptual design. Paper presented at the Proceedings of the IEEE International Conference on Requirements Engineering.

Cellier, J. M., & Eyrolle, H. (1992). Interference between switched tasks. Ergonomics, 35(1), 25-36.

Chen, H., Hsu, P., Orwig, R., Hoopes, L., & Nunamaker, J. F. (1994). Automatic concept classification of text from electronic meetings. Commun. ACM, 37(10), 56-73.

Chen, M., Liou, Y., Wang, C. W., Fan, Y. W., & Chi, Y. P. J. (2007). TeamSpirit: Design, implementation, and evaluation of a Web-based group decision support system. Decision Support Systems, 43(4), 1186-1202.

Compendium Institute (2009). Compendium download. Retrieved August 2009, from http://compendium.open.ac.uk/institute/download/download.htm

Conklin, J. (2006). Dialogue Mapping: building shared understanding of wicked problems. Chichester, West Sussex, England: John Wiley & Sons.

Cooperrider, D. L., & Whitney, D. (2009). A positive revolution in change: appreciative inquiry. Retrieved from http://appreciativeinquiry.case.edu/uploads/whatisai.pdf

Crealogic (2009). crealogic.nl - productinformatie. Retrieved September 2009, from http://www.crealogic.nl/site/productinformatie.html

CS Results GMBH (2006). Nemo, network moderating. Retrieved 5 October 2007, from www.nemo.de

Davis, A., Badura, V., & de Vreede, G.-J. (2008). Understanding methodological differences to study convergence in group support system sessions. In R. O. Briggs (Ed.), Groupware: Design, Implementation and Use. Germany: Springer-Verlag Berlin Heidelberg.

Davis, A., de Vreede, G.-J., & Briggs, R. O. (2007). Designing thinkLets for convergence. Paper presented at the Thirteenth Americas conference on information systems, Keystone, Colorado.

de Bruijn, H., & de Vreede, G.-J. (1999). Exploring the boundaries of successful GSS application: Supporting inter-organizational policy networks. Paper presented at the Proceedings of the Hawaii International Conference on System Sciences.

de Vreede, G.-J., Boonstra, J., & Niederman, F. (2002). What Is Effective GSS Facilitation? A Qualitative Inquiry into Participants' Perceptions. Paper presented at the Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS'02)-Volume 1 - Volume 1.

de Vreede, G.-J., & Briggs, R. O. (2005). Collaboration Engineering: Designing Repeatable Processes for High-Value Collaborative Tasks. Paper presented at the System Sciences, 2005. HICSS '05. Proceedings of the 38th Annual Hawaii International Conference on.

de Vreede, G.-J., Briggs, R. O., & Massey, A. P. (2009). Collaboration Engineering: Foundations and Opportunities: Editorial to the special issue of the Journal of the Association for Information Systems. Journal of the Association for Information Systems, 10(special issue), 121-137.

de Vreede, G.-J., Davison, R., M., & Briggs, R. O. (2003). How a Silver Bullet may lose its shine. Communications of the ACM, 46(8), 96 - 101.

de Vreede, G.-J., Fruhling, A. L., & Chakrapani, A. (2005). A Repeatable Collaboration Process for Usability Testing. Paper presented at the Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05) - Track 1 - Volume 01.


de Vreede, G.-J., Niederman, F., & Paarlberg, I. (2001). Measuring participants' perception on facilitation in group support systems meetings. Paper presented at the Proceedings of the 2001 ACM SIGCPR conference on Computer personnel research.

de Vreede, G.-J., Vogel, D., Kolfschoten, G. L., & Wien, J. (2003). Fifteen Years of GSS in the Field: A Comparison Across Time and National Boundaries. Paper presented at the Proceedings of the 36th Hawaii International Conference on System Sciences, Hawaii.

de Vreede, G.-J., & van Wijk, W. (1997). A field study into the organizational application of group support systems. Communications of the ACM, 04, 151-159.

Dean, D. L., Orwig, R. E., & Vogel, D. R. (2000). Facilitation Methods for Collaborative Modeling Tools. Group Decision and Negotiation, 9(2), 109-127.

DeLuca, D., & Valacich, J. S. (2005). Outcomes from conduct of virtual teams at two sites: Support for media synchronicity theory. Paper presented at the Proceedings of the Annual Hawaii International Conference on System Sciences.

den Hengst, M., & Adkins, M. (2005). The demand rate of facilitation functions. Paper presented at the Proceedings of the Annual Hawaii International Conference on System Sciences, Big Island, HI.

den Hengst, M., & Adkins, M. (2007). Which collaboration patterns are most challenging: a global survey of facilitators. Paper presented at the Hawaii international conference on system sciences.

den Hengst, M., van de Kar, E., & Appelman, J. (2004). Designing mobile information services: user requirements elicitation with GSS design and application of a repeatable process. Paper presented at the System Sciences, 2004. Proceedings of the 37th Annual Hawaii International Conference on.

Dennis, A. R., Fuller, R. M., & Valacich, J. S. (2008). Media, tasks, and communication processes: A theory of media synchronicity. MIS Quarterly: Management Information Systems, 32(3), 575-600.

Dennis, A. R., & Valacich, J. S. (1999). Rethinking Media Richness: Towards a Theory of Media Synchronicity. Paper presented at the 32nd Hawaii International Conference on System Sciences, Hawaii.

Dennis, A. R., Wixom, B. H., & Vandenberg, R. J. (2001). Understanding fit and appropriation effects in group support systems via meta-analysis. MIS Quarterly: Management Information Systems, 25(2), 167-193.

Deshpande, N., de Vries, B., & van Leeuwen, J. P. (2005). Building and supporting shared understanding in collaborative problem-solving. Paper presented at the Proceedings of the International Conference on Information Visualisation, London.

Dessus, P. (2009). An overview of LSA-based systems for supporting learning and teaching, Frontiers in Artificial Intelligence and Applications (Vol. 200, pp. 157-164).

Dreher, H. (2007). Automatic concept analysis for plagiarism detection. Issues in Informing Science and Information Technology, 4, 601 - 615.

Dreher, H., & Williams, R. (2006). Assisted query formulation using normalised word vector and dynamic ontological filtering. FQAS 2006, LNAI 4027, 282 - 294.

Duivenvoorde, G. P. J. (2007). Support for sharing & capturing of best practice. Unpublished internship report, Delft University of Technology, Delft, The Netherlands.

Duivenvoorde, G. P. J. (2008). Commitment in GSS workshops: een causale analyse naar de rol van commitment in het success van GSS workshops. Unpublished bachelor thesis (part 2 of 2), Delft University of Technology, Delft, The Netherlands.

Duivenvoorde, G. P. J., Kolfschoten, G. L., Briggs, R. O., & de Vreede, G.-J. (2009). Towards an Instrument to Measure Successfulness of Collaborative Effort from a Participant Perspective. Paper presented at the Hawaii International Conference on System Sciences.

Easton, G. K., George, J. F., Nunamaker, J. F., Jr., & Klein, G. (1990). Using two different electronic meeting system tools for the same task: An experimental comparison. J. Manage. Inf. Syst., 7(1), 85-100.


Ellis, C. A., Gibbs, S. J., & Rein, G. (1991). Groupware: some issues and experiences. Communications of the ACM, 34(1), 39 - 58.

Facilitate.com (2009). Web meeting & decision making software for effective meetings. Retrieved July 2009, from www.facilitate.com

Faieta, B., Huberman, B., & Verhaeghe, P. (2006). Scalable Online Discussions as Listening Technology. Paper presented at the Proceedings of the 39th Annual Hawaii International Conference on System Sciences - Volume 01.

Fruhling, A., & de Vreede, G.-J. (2006). Collaborative Usability Testing to Facilitate Stakeholder Involvement. In Value-Based Software Engineering (pp. 201-223).

Fruhling, A., Steinhauser, L., Hoff, G., & Dunbar, C. (2007). Designing and Evaluating Collaborative Processes for Requirements Elicitation and Validation. Paper presented at the Hawaii International Conference on System Sciences.

Giordano, G. A. (2005). Task complexity and deception detection in a collaborative group setting. Paper presented at the HICSS.

Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the association for information systems, 8(5), 312 - 335.

GroupSupport (2009). GroupSupport. Retrieved November 2009, from http://www.groupsupport.com/index.html

Groupsystems.com (2007). Groupsystems: ThinkTank: The leading virtual interactive meeting, brainstorming, facilitation & collaboration technology. Retrieved 6 October 2007, from http://www.groupsystems.com/technology/products-and-services/thinktank

Harder, R., & Higley, H. (2002). Applying a group support system to mission analysis. 6th World Multiconference on Systemics, Cybernetics and Informatics, Vol Viii, Proceedings, 370-375.

Harder, R. J., & Higley, H. (2004). Application of ThinkLets to team cognitive task analysis. Paper presented at the System Sciences, 2004. Proceedings of the 37th Annual Hawaii International Conference on.

Harder, R. J., Keeter, J. M., Woodcock, B. W., Ferguson, J. W., & Wills, F. W. (2005). Insights in implementing Collaboration Engineering. Paper presented at the Proceedings of the Annual Hawaii International Conference on System Sciences, Big Island, HI.

Hayne, S. C. (1999). The facilitators perspective on meetings and implications for group support systems design. Data Base for Advances in Information Systems, 30(3-4), 72-90.

Helquist, J. H., Santanen, E. L., & Kruse, J. (2007). Participant-driven GSS: Quality of brainstorming and allocation of participant resources. Paper presented at the Proceedings of the Annual Hawaii International Conference on System Sciences.

Heninger, W. G., Dennis, A. R., & Hilmer, K. M. (2006). Individual cognition and dual-task interference in group support systems. Information Systems Research, 17(4), 415-424.

Hermans, L. M., & Thissen, W. A. H. (2009). Actor analysis methods and their use for public policy analysts. European Journal of Operational Research, 196(2), 808-818.

Herrmann, T. (2009). Design heuristics for computer supported collaborative creativity. Paper presented at the Proceedings of the 42nd Annual Hawaii International Conference on System Sciences, HICSS.

Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design Science in Information Systems Research. MIS Quarterly, 28(1), 75 - 105.

Hirsch, P. L., & McKenna, A. F. (2008). Using reflection to promote teamwork understanding in engineering design education. International Journal of Engineering Education.

Huang, W. W., Wei, K. K., Watson, R. T., & Tan, B. C. Y. (2003). Supporting virtual team-building with a GSS: an empirical investigation. Decision Support Systems, 34(4), 359-367.

IAF (2004). Mission, Values and Vision. Retrieved July 2009, from http://www.iaf-world.org/i4a/pages/index.cfm?pageid=3343

in 't Veld, J. (1987). Analyse van organisatie problemen. Leiden: Stenfert Kroese.

Jenkins, J. C. (2008). IAF Methods Database. Retrieved July 2009, from www.iaf-methods.org


Jenkins, J. C., Bootsman, P., & Coerts, J. (2004). IAF Methods Database: Explanation of Taxonomy. Groningen, Netherlands.

Johansen, R. (1988). GroupWare: Computer Support for Business Teams. The Free Press.

Kirschner, P. (2002). Can we support CSCL? Educational, social and technological affordances for learning. Open University of the Netherlands.

Kirschner, P. A., Buckingham Shum, S. J., & Carr, S. (2003). Visualizing Argumentation: software tools for collaborative and educational sense-making. London: Springer-Verlag.

Koch, I. (2008). Mechanisms of interference in dual tasks [Mechanismen der Interferenz in Doppelaufgaben], 59(1), 24-32.

Kolfschoten, G. L. (2007). Theoretical Foundations for Collaboration Engineering. Delft: Department of Systems Engineering, Faculty of Technology, Policy and Management, Delft University of Technology.

Kolfschoten, G. L., Appelman, J. H., Briggs, R. O., & de Vreede, G.-J. (2004). Recurring patterns of facilitation interventions in GSS sessions. Paper presented at the Proceedings of the Hawaii International Conference on System Sciences, Big Island, HI.

Kolfschoten, G. L., Briggs, R. O., de Vreede, G.-J., Jacobs, P. H. M., & Appelman, J. H. (2006). A conceptual foundation of the thinkLet concept for Collaboration Engineering. International Journal of Human-Computer Studies, 64(7), 611-621.

Kolfschoten, G. L., den Hengst-Bruggeling, M., & de Vreede, G. J. (2007). Issues in the design of facilitated collaboration processes. Group Decision and Negotiation, 16(4), 347-361.

Kolfschoten, G. L., & Lee, C. (2010). Self-Guiding Group Support Systems: Can Groups Use GSS without Support? Paper presented at the Hawaii International Conference on System Sciences.

Kolfschoten, G. L., Lowry, P. B., Dean, D. L., & Kamal, M. (2007). A measurement framework for patterns of collaboration.

Kolfschoten, G. L., & Santanen, E. L. (2007). Reconceptualizing Generate thinkLets: the Role of the Modifier. Paper presented at the Proceedings of the 40th Annual Hawaii International Conference on System Sciences.

Koneri, P. G., de Vreede, G.-J., Dean, D. L., Fruhling, A. L., & Wolcott, P. (2005). The design and field evaluation of a repeatable collaborative software code inspection process. Groupware: Design, Implementation, and Use, 3706, 325-340.

Krych-Appelbaum, M., Law, J. B., Jones, D., Barnacz, A., Johnson, A., & Keenan, J. P. (2007). "I think I know what you mean": The role of the theory of mind in collaborative communication. Interaction Studies, 8(2), 267-280.

Larman, C. (2004). Agile and iterative development: a manager's guide. Boston: Pearson Education, Inc.

Lowry, P. B., Dean, D. L., Roberts, T. L., & Marakas, G. (2009). Toward building self-sustaining groups in PCR-based tasks through implicit coordination: the case of heuristic evaluation. JAIS, 10(special issue), 170 - 195.

Martz, B., Vogel, D., & Nunamaker, J. (1992). Electronic Meeting Systems: Results from the field. Decision Support Systems (April 1992).

MeetingDragon (2009). MeetingDragon for venues. Retrieved July 2009, from http://www.meetingdragon.com/index.asp

Meetingworks (2009). Meetingworks - Electronic meeting and collaboration. Retrieved July 2009, from www.meetingworks.com

Miller, E. J., & Rice, A. K. (1967). Systems of organization: The control of task and sentient boundaries. London: Tavistock Publications.

Miranda, S. M., & Bostrom, R. P. (1997). Meeting facilitation: Process versus content interventions. Paper presented at the Proceedings of the Hawaii International Conference on System Sciences, Wailea, HI, USA.

Montibeller, G., Shaw, D., & Westcombe, M. (2006). Using decision support systems to facilitate the social process of knowledge management. Knowledge Management Research and Practice, 4(2), 125-137.


Nabukenya, J., Van Bommel, P., & Proper, H. A. E. (2008). Repeatable collaboration processes for mature organizational policy making, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5411 LNCS, pp. 217-232). Omaha, NE.

Nederlandse Taalunie (2004). het Corpus Gesproken Nederlands. Retrieved November 2009, from http://lands.let.ru.nl/cgn/

Nonaka, I., & Konno, N. (1995). The knowledge creating company. Oxford, England: Oxford University Press.

Noor, M., Grünbacher, P., & Hoyer, C. (2008). A Collaborative Method for Reuse Potential Assessment in Reengineering-Based Product Line Adoption Balancing Agility and Formalism in Software Engineering (pp. 69-83).

Nunamaker, J. F., Jr., Vogel, D. R., Heminger, A., Martz, B., Grohowski, R., & McGoff, C. (1989). Experiences at IBM with Group Support Systems. Decision Support Systems, 5(2), 183-196.

Nunamaker, J. F., Dennis, A. R., Valacich, J. S., Vogel, D., & George, J. F. (1991). Electronic meeting systems. Commun. ACM, 34(7), 40-61.

Nunamaker Jr., J. F. (1997). Future research in group support systems: Needs, some questions and possible directions. International Journal of Human Computer Studies, 47(3), 357-385.

Oliveira, A. W., & Sadler, T. D. (2008). Interactive patterns and conceptual convergence during student collaborations in science. Journal of Research in Science Teaching, 45(5), 634-658.

Orwig, R. E., Chen, H., & Nunamaker, J. F., Jr. (1997). A graphical, self-organizing approach to classifying electronic meeting output. J. Am. Soc. Inf. Sci., 48(2), 157-170.

Osborn, A. F. (1993). Applied imagination: principles and procedures of creative problem-solving. Buffalo, New York: Creative education foundation press.

Parker, K. R., Williams, R., Nitse, P. S., & Tay, A. S. M. (2008). Use of the Normalized Word Vector Approach in document classification for an LKMC. Issues in Informing Science and Information Technology, 5, 513 - 524.

Picture It Solved (2006). Picture it solved - a visual approach to thinking. Retrieved August 2009, from http://www.pictureitsolved.com/resources/dialoguemapping.cfm

Post, B. Q. (1993). A Business Case Framework for Group Support Technology. Journal of Management Information Systems, 9(3), 7 - 26.

Read, A., Renger, M., Briggs, R. O., & de Vreede, G.-J. (2009). Fundamental topics of organizing: A research agenda. Paper presented at the Proceedings of the 42nd Annual Hawaii International Conference on System Sciences, HICSS, Waikoloa, HI.

Rogers, E., & Kincaid, D. L. (1981). Communication networks: Toward a new paradigm for research. London: The Free Press.

Rohde, M., Stevens, G., Broedner, P., & Wulf, V. (2009). Towards a paradigmatic shift in IS: designing for social practice. Paper presented at the DESRIST'09, Malvern, PA, USA.

Rosenthal, S., & Finger, S. (2006). Design collaboration in a distributed environment. Paper presented at the Proceedings - Frontiers in Education Conference, FIE, San Diego, CA.

Rutkowski, A. F., Vogel, D., Bemelmans, T. M. A., & van Genuchten, M. (2002). Group support systems and virtual collaboration: The HKNet project. Group Decision and Negotiation, 11(2), 101-125.

Samarah, I., Paul, S., & Tadisina, S. (2008). Knowledge conversion in GSS-aided virtual teams: An empirical study. Paper presented at the Proceedings of the Annual Hawaii International Conference on System Sciences, Big Island, HI.

Samarah, I., Paul, S., & Tadisina, S. (2007). Collaboration technology support for knowledge conversion in virtual teams: A theoretical perspective. Paper presented at the Proceedings of the Annual Hawaii International Conference on System Sciences, Big Island, HI.

Santanen, E. L. (2005). Resolving ideation paradoxes: Seeing apples as oranges through the clarity of ThinkLets. Paper presented at the Proceedings of the Annual Hawaii International Conference on System Sciences.


Santanen, E. L., Briggs, R. O., & de Vreede, G.-J. (2002). Towards an understanding of creative solution generation. Paper presented at the Hawaii international conference on system sciences.

Santanen, E. L., Briggs, R. O., & de Vreede, G.-J. (2004). Causal relationships in creative problem solving: comparing facilitation interventions for ideation. J. Manage. Inf. Syst., 20(4), 167-198.

Santanen, E. L., & de Vreede, G.-J. (2004). Creative Approaches to Measuring Creativity: Comparing the Effectiveness of Four Divergence thinkLets. Paper presented at the Hawaii international conference on system sciences, Big Island, Hawaii.

Sawicka, A. (2008). Dynamics of cognitive load theory: A model-based approach. Computers in Human Behavior, 24(3), 1041-1066.

Schwaber, K., & Beedle, M. (2001). Agile Software Development with Scrum. Upper Saddle River, New Jersey: Prentice Hall.

Shared Space Institute (2010). Over shared space. Retrieved March 2010, from http://www.sharedspace.eu/nl/over-ons/wat-is-shared-space

Shen, Q. P., Chung, J. K. H., Li, H., & Shen, L. Y. (2004). A Group Support System for improving value management studies in construction. Automation in Construction, 13(2), 209-224.

Slater, J. S., & Anderson, E. (1994). Communication convergence in electronically supported discussions: An adaptation of Kincaid's convergence model. Telematics and Informatics, 11(2), 111-125.

Smith, J. M., & Smith, D. C. P. (1977). Database abstractions: aggregation and generalization. ACM Trans. Database Syst., 2(2), 105-133.

Sweller, J., van Merrienboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10, 251-296.

Tarmizi, H., Payne, M., Noteboom, C., Zhang, C., Steinhauser, L., de Vreede, G.-J., et al. (2006). Technical and Environmental Challenges of Collaboration Engineering in Distributed Environments Groupware: Design, Implementation, and Use (pp. 38-53).

Taylor, D. W., Berry, P. C., & Block, C. H. (1958). Does Group Participation When Using Brainstorming Facilitate or Inhibit Creative Thinking? Administrative Science Quarterly, 3(1), 23-47.

Te'eni, D. (2001). Review: a cognitive-affective model of organizational communication for designing IT. MIS Q., 25(2), 251-312.

TeamSupport (2007). TeamSupport - The online collaboration solution for your team. Retrieved June 2009, from http://www.teamsupport.net/nl/home.php

Teamworks (2009). teamworks. Retrieved July 2009, from http://www.teamworks.si/index.php?id=zacnemo_e.php&to=0&la=1

Tomasello, M., Carpenter, M., Call, J., Behne, T., & Moll, H. (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675-691.

University of Nebraska at Omaha (2009). Dr. Gert-Jan de Vreede: Home Page. Retrieved October 2009, from http://faculty.ist.unomaha.edu/gdevreede/

van Kol, A. (2006). synoniemen.net - gratis online synoniemenwoordenboek. Retrieved January 2010, from synoniemen.net

ViaGroep N.V. (2009). kort CV prof.dr.ing. Hans Mulder, MscBA. Retrieved October 2009, from http://www.viagroep.nl/Deskundigenonderzoek/kortCVprofdringHansMulderMBA/tabid/1066/Default.aspx

Vivacqua, A., Marques, L. C., & De Souza, J. M. (2008). Assisting meeting facilitation through automated analysis of group dynamics. Paper presented at the Proceedings of the 2008 12th International Conference on Computer Supported Cooperative Work in Design, CSCWD, Xi'an.

Warr, A., & O'Neill, E. (2005). Understanding design as a social creative process. Paper presented at the Proceedings of the 5th conference on Creativity and Cognition.

Weick, K. E. (1985). Cosmos vs. chaos: Sense and nonsense in electronic contexts. Organizational Dynamics, 14(2), 51-64.


Wikipedia (2008). Social constructionism. Retrieved July 2009, from http://en.wikipedia.org/wiki/Social_constructionism

Wikipedia (2009a). Inquiry-based learning. Retrieved July 2009, from http://en.wikipedia.org/wiki/Inquiry-based_learning

Wikipedia (2009b). Tag (metadata). Retrieved August 2009, from http://en.wikipedia.org/wiki/Tag_%28metadata%29

Williams, R., & Dreher, H. (2004). Automatically grading essays with Markit. Issues in Informing Science and Information Technology, ?, 693 - 700.

Wong, Z., & Aiken, M. (2003). Automated facilitation of electronic meetings. Information and Management, 41(2), 125-134.

Zing Technologies (2009). Zing technologies. Retrieved July 2009, from http://www.anyzing.com/businesstitles.html


Appendix A Overview of the guidelines for IS research

This appendix describes the implications of the guidelines developed by Hevner et al. (2004).

Design as an artefact
According to Hevner and colleagues (2004), the ‘result of a design-science research in IS is, by definition, a purposeful IT artefact created to address an important organizational problem’. In this research two artefacts will be created: a ‘toolbox’ containing methods for convergence, and a GSS tool capable of supporting these methods.

Problem relevance
Legitimately, Hevner et al. (2004) state that ‘the relevance of any design-science research effort is with respect to a constituent community’. The constituent community for this research is composed of every employee interested in efficient collaboration within his project team, business unit or organisation, as well as practitioners, facilitators and collaboration engineers. The research is relevant to this community because it proposes a solution for the time-consuming step of convergence in GSS-supported meetings, a step that is also characterised by a high cognitive load for the facilitator or practitioner as well as the participants. The outcome of this research aims to reduce that cognitive load. The research is also relevant to this community because it removes one of the hurdles for successful GSS implementation within organisations.

Design evaluation
As stated by Hevner et al. (2004): ‘the business environment establishes the requirements upon which the evaluation of the artefact is based’. Hevner et al. (2004) suggest several methods for design evaluation. Davis et al. (2007) already remarked that analytical optimization and structural white-box testing are not considered applicable for the evaluation of a convergence method, and that the descriptive methods are not powerful enough. Davis et al. (2007) also describe how the other evaluation methods described by Hevner et al. (2004) can be applied to evaluate a method for convergence. A short overview is given in the table below.

Observational – case study (Hevner et al., 2004)
Goal: Collect qualitative and quantitative data in the field (Davis et al., 2007).
Achieved by: Executing a real-life workshop in which the method is used; during and after the workshop, record data from the workshop and the perceptions of the participants and facilitator (Davis et al., 2007).

Observational – field study (Hevner et al., 2004)
Goal: Cross-case analysis to gain deeper insights into persistent patterns regarding the performance of the method (Davis et al., 2007).
Achieved by: Comparing the qualitative and quantitative data from multiple case studies (Davis et al., 2007).

Analytical – static analysis (Hevner et al., 2004)
Goal: Examining the structure of the designed method (Davis et al., 2007).
Achieved by: Inspection of documentation by expert facilitators and collaboration engineers; assessment of strengths and weaknesses with respect to performance criteria (Davis et al., 2007).

Analytical – architecture analysis (Hevner et al., 2004)
Goal: Demonstration of the fit of the method with the tool (GSS) (Davis et al., 2007).
Achieved by: Defining the specific capabilities of the GSS platform on which the method is to be implemented and demonstrating the fit (Davis et al., 2007).

Analytical – dynamic analysis (Hevner et al., 2004)
Goal: Evaluation of dynamic qualities of the method (Davis et al., 2007).
Achieved by: Observational and experimental methods (Davis et al., 2007).

Experimental – controlled experiment (Hevner et al., 2004)
Goal: Assess the performance of the method (Davis et al., 2007).
Achieved by: An experiment; a key consideration is the scope and nature of the task (Davis et al., 2007).

Experimental – simulation (Hevner et al., 2004)
Goal: Support laboratory experiments by reducing the number of subjects needed (Davis et al., 2007).
Achieved by: Configuring a computer to simulate the behaviour of human participants (Davis et al., 2007).

Testing – functional (black box) testing (Hevner et al., 2004)
Goal: Uncover design flaws and fine-tune the method’s elements (Davis et al., 2007).
Achieved by: Executing the method in a pilot group (Davis et al., 2007).

Overview of evaluation methods for convergence.

Because this research aims to design a method for convergence and a tool to support the method, at least architecture analysis is needed to assess the fit between the method and the tool.

Furthermore, a case study would be valuable, since it provides an opportunity to assess the performance of the designed method in real life. A case study is also the preferred evaluation method of Rohde et al. (2009). Unfortunately, a case study is not always possible and is hard to arrange for a method that has not yet been shown to lead to good results. Static analysis and functional (black box) testing are a good first step to gain confidence in the performance of the method.

Research contributions
The research contributes by providing insight into convergence and one or more methods for convergence, including a tool to support these methods.

Research rigor
As Hevner et al. (2004) point out: ‘rigor is derived from the effective use of the knowledge base – theoretical foundations and research methodologies’.

Design as a search process
The two evaluation processes in the research design allow for an iterative way of designing.

IS Research
As implied by the research questions, the research aims to add to the body of knowledge regarding how groups converge on information. The research does so by providing an overview and assessment of current methods in this field.

Part of the research is to design a method for convergence by combining the strengths of current methods and approaches. To assess the quality of the method, thorough evaluation is needed, preferably in the environment where the method is to be used.


Appendix B Overview of articles

Wong and Aiken (2003) describe an experiment in which they compared different facilitation techniques (expert, novice and automated) using a simple process of two brainstorming tasks and a voting task. In their set-up they did not use a convergence step between the brainstorm and the vote, and no problems with this set-up are reported (Wong & Aiken, 2003). This is remarkable, because voting directly after brainstorming normally leads to problems with redundant and unclear entries, which make meaningful voting impossible.
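The redundancy problem noted above can be made concrete with a toy similarity check: a minimal sketch that flags near-duplicate brainstorm entries by word overlap (Jaccard similarity). The tokenisation and the 0.4 threshold are assumptions chosen for illustration only; this is far simpler than the similarity-detection technique developed in this thesis.

```python
# Toy redundancy detection for a brainstormed idea list, using
# word-overlap (Jaccard) similarity. Threshold 0.4 is a hypothetical
# choice for illustration, not an empirically validated value.

def tokens(idea):
    """Return the lowercase word set of an idea, ignoring punctuation."""
    return {w.strip(".,;:!?").lower() for w in idea.split() if w.strip(".,;:!?")}

def jaccard(a, b):
    """Jaccard similarity of two token sets: |intersection| / |union|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def redundant_pairs(ideas, threshold=0.4):
    """Return index pairs of ideas whose word overlap meets the threshold."""
    sets = [tokens(i) for i in ideas]
    return [(i, j)
            for i in range(len(sets))
            for j in range(i + 1, len(sets))
            if jaccard(sets[i], sets[j]) >= threshold]

ideas = [
    "Introduce flexible working hours",
    "Allow flexible working hours for staff",
    "Upgrade the office coffee machine",
]
print(redundant_pairs(ideas))  # → [(0, 1)]: the first two ideas overlap
```

Even this crude check shows why a convergence step is needed before voting: without it, the first two ideas would compete for the same votes.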

Vivacqua et al. (2008) describe interesting research in the field of facilitation assistance in GSS workshops. They added tagging during brainstorming to measure group dynamics and to notify the facilitator early about possible conflicts. They expect that tagging also influences the divergent step of the workshop, in terms of speed and cognitive load, but unfortunately they do not have any test data yet (Vivacqua et al., 2008). The author was contacted for further information.

Tomasello et al. (2005) explain that a shared goal is needed to create shared understanding.

Slater and Anderson (1994) acknowledge the difficulty of reducing and clarifying a list of brainstormed items. They use an adapted version of Kincaid’s model of convergence to illustrate the process of convergence in GSS workshops. The model consists of several rounds and uses the concept of noise to denote unwanted outcomes (Slater & Anderson, 1994). The article gives no methods or best practices.

Shen et al. (2004) describe the use of GSS in a construction value management process and stress the importance of the facilitator in guiding the collaboration process. They mention the facilitator's role in converging information for the group, but give no best practices or methods for it (Shen, et al., 2004). Process facilitation has a positive impact on the meeting process (Miranda & Bostrom, 1997). In a two-year study Samarah et al. (2007), among others, also conclude that facilitation is critical to support convergence of ideas and concepts in GSS-supported teams (Samarah, Paul, & Tadisina, 2008; Samaran, et al., 2007). Their research focused on knowledge conversion in GSS-aided virtual teams. Den Hengst and Adkins (2005) give a classification of facilitation functions and an overview of the difficulty of those functions as perceived by a number of facilitators (den Hengst & Adkins, 2005).

Results from the HKNet project do not address the exact nature of the facilitation interventions (Rutkowski, Vogel, Bemelmans, & van Genuchten, 2002).

With regard to co-located vs. geographically dispersed teams, Rosenthal and Finger (2006) found evidence that dispersed teams formulate their ideas better (Rosenthal & Finger, 2006). Better-formulated ideas provide easier input for a convergence step.

Research on conceptual convergence within student groups by Oliveira and Sadler (2008) suggests that the best atmosphere for this process is characterized by friendliness, active participation and a critical attitude without being disrespectful (Oliveira & Sadler, 2008).

Nunamaker (1997) acknowledges the problem of convergence and information overload. He argues that some sort of knowledge management process is needed; the Nominal Group Technique is given as an example. Also, graphical representation and the availability of multiple views of the information should make it easier for participants to focus and make sense (Nunamaker Jr., 1997). Nunamaker also notes the problem of having a trained facilitator within organizations.

Compendium, a tool used in the knowledge management field, uses dialogue mapping to build shared understanding (Montibeller, et al., 2006). The facilitator asks a question and the group starts giving input (brainstorming); the facilitator maps these answers in the tool using the dialogue mapping notation.

Krych-Appelbaum et al. (2007) suggest using Theory of Mind to understand how one should communicate information effectively in conversations (Krych-Appelbaum, et al., 2007).

Koneri et al. (2005) used a thinkLet-based collaboration process for a software code inspection task. They executed the process both in a conventional and in a GSS environment. The process consisted of a brainstorming task, followed by a convergence and a voting step. For convergence the Concentration thinkLet was used. The paper does not describe the experiences with this thinkLet in detail, but only states that the overall experience with the process was good in both environments (Koneri, de Vreede, Dean, Fruhling, & Wolcott, 2005).

The use of goal setting before the start of a collaboration process can significantly increase its success, as indicated by Huang et al. (Huang, Wei, Watson, & Tan, 2003).

Reflection is proposed as a way to promote teamwork (Hirsch & McKenna, 2008).

A study by Herrmann (2009) on supporting creativity in computer-supported environments also reveals that facilitators find converging a time-consuming step that is not efficiently supported by current groupware functions (Herrmann, 2009). In the study the task was to reduce a set of brainstormed ideas by prioritizing them; the difficulty was not to lose valuable contributions. The experiments showed that 'continuous documentation is essential to support synergy building but has to be as unobtrusive as possible to avoid production blocking' (Herrmann, 2009). Continuous documentation should include semi-automatic identification of correlations, simultaneous clustering and documentation, directing attention to neglected or conflicting aspects, and dialogue mapping.

A field study in the US Army describes an attempt to implement Collaboration Engineering. The researchers combined three theories / methods: Collaboration Engineering, GSS and the Skilled Facilitator Approach (SFA). They found that ground rule #6 of the SFA needs to be followed when executing a convergence pattern of collaboration. In the experiment the ExpertChoice and Review/Reflect thinkLets were used. Ground rule #6 reads: 'Combine advocacy and inquiry' (R. J. Harder, et al., 2005).

Shared understanding: 'shared understanding is an objected state achieved through interactive processes by which a common ground between individuals is constructed and maintained.' (Deshpande, et al., 2005). According to Deshpande et al., two things are needed to create shared understanding: (1) participants should be able to hear or see each other's contributions, and (2) participants should be able to integrate each other's contributions (Deshpande, et al., 2005). The researchers propose an arena, which they call the collaborative problem solving space, where participants can interact. In their view, brainstormed ideas are merged or deleted in a visual way, supported by a facilitator, during a group discussion (Deshpande, et al., 2005). In a next step, (causal) relationships between the concepts are identified. This approach has similarities with the causal mapping approach.

Dennis and colleagues (2008; 1999) propose Media Synchronicity Theory, which relates the type of media to the task faced by a group. For convergence tasks, high synchronicity is needed. Five media capabilities are mentioned: immediacy of feedback, parallelism, symbol variety, reprocessability and rehearsability (Dennis, et al., 2008; Dennis & Valacich, 1999).

Dean et al. (2000) acknowledge the difficulty of reducing and clarifying in collaborative modelling tasks. They showed that facilitation can make the difference (Dean, et al., 2000). Another example of a time consuming convergence process is described by Catledge and Potts (1996). Heninger et al. (2006) also acknowledge the difficulty of processing large amounts of information in GSS workshops (Heninger, et al., 2006). They have found evidence that dual-task interference (or attention blocking) lies at the root of this problem.

Chen et al. (2007) describe a GDSS called TeamSpirit; among other things, this system allows users to add (brainstorm) ideas. The researchers acknowledge the problem of reducing and clarifying the list of ideas, which they term consolidation. Since their tool is aimed at geographically distributed use, they see the consolidation task specifically as the facilitator's. The facilitator is supported in this task with a search function, for easier identification of similar ideas (Chen, et al., 2007).
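Chen et al. do not detail how this similarity support works. Purely as an illustration, a very simple form of similarity detection between brainstormed ideas can be sketched with word-overlap (Jaccard) similarity; the function names and the threshold below are hypothetical, not taken from TeamSpirit:

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Word-overlap similarity between two idea texts (0.0 .. 1.0)."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

def candidate_duplicates(ideas, threshold=0.5):
    """Return index pairs of ideas whose similarity exceeds the threshold."""
    pairs = []
    for i in range(len(ideas)):
        for j in range(i + 1, len(ideas)):
            if jaccard_similarity(ideas[i], ideas[j]) >= threshold:
                pairs.append((i, j))
    return pairs

ideas = [
    "improve communication between departments",
    "better communication between departments",
    "reduce meeting length",
]
print(candidate_duplicates(ideas))  # the first two ideas overlap strongly: [(0, 1)]
```

Real similarity detection would need to handle synonyms and paraphrases, which simple word overlap misses; the sketch only shows where automated support could plug into a consolidation task.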

A series of experiments in Finland using GSS and thinkLets, described by Bragge and colleagues (2007), shows how successful the collaboration engineering approach can be (Bragge, et al., 2007). The goal of the workshops was strategy building. A first brainstorming step (FreeBrainstorm) was followed by convergence using the FastFocus thinkLet; no problems were reported. Another study by Bragge and colleagues focused on collecting feedback for a software system using GSS and thinkLets. Divergence occurred either via a PrivateFreeBrainstorm (NGT) or via a FreeBrainstorm thinkLet; convergence of the list of feedback was accomplished using the FastFocus (or PrivateFastFocus) thinkLet. Using FastFocus the original list of 235 ideas was reduced to only 36, which were selected for further elaboration based on prioritizing with the BroomWagon thinkLet. Participants' perception of the process was positive; unfortunately the paper does not mention the time it took to reduce the list of ideas (Bragge, Merisalo-Rantanen, et al., 2005). The authors were contacted for further information and details.

Adkins et al. (2003; 2004) elaborate on The Skilled Facilitator approach and propose modifications to enable distributed facilitation for large groups (Adkins, Burgoon, & Nunamaker Jr., 2003; Adkins, Kruse, & Younger, 2004). They also give a list of tool requirements to enable a large group to collaborate.

Helquist et al. (2007) introduce the concept of a participant-driven GSS, intended for asynchronous and distributed collaboration. For convergence, all participants go through a number of steps independently: (1) evaluate input, (2) correct input, (3) combine redundancies, (4) cluster ideas into threads, (5) name and rename threads, (6) summarize threads. The system is still in a conceptual stage, so no experiences are available yet.
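Helquist et al. describe these steps only conceptually. As an illustration only, the redundancy and clustering steps (3-5) could be modelled along the following lines; the duplicate rule, the clustering key and the thread-naming scheme are all hypothetical simplifications, not part of the proposed system:

```python
from collections import defaultdict

def build_threads(ideas):
    """Sketch of steps 3-5: combine duplicates, cluster into threads, name threads."""
    # Step 3: combine redundancies (here crudely: exact duplicates after normalisation)
    unique = list(dict.fromkeys(i.strip().lower() for i in ideas))
    # Step 4: cluster ideas into threads (here crudely: by their first word)
    threads = defaultdict(list)
    for idea in unique:
        threads[idea.split()[0]].append(idea)
    # Step 5: name threads (here: clustering key plus idea count)
    return {f"{key} ({len(items)} ideas)": items for key, items in threads.items()}

ideas = ["reduce costs", "Reduce costs", "reduce waiting time", "improve morale"]
print(build_threads(ideas))
```

In the actual proposal these steps are performed by the participants themselves; the sketch only makes the data flow of the pipeline concrete.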

Appendix C Interview with Dr. H. Verheul, transcript

In this appendix the interview that was conducted with Dr. Verheul is described. Dr. Verheul read this abstract and approved it by email on 29 September, after having suggested a minor change. The main text of this thesis refers to this abstract.

28 September 2009, Stenden University Leeuwarden, 12.30 – 13.45

The interview with Dr. H. Verheul was semi-structured. The goal was to learn how he facilitates a convergent collaboration process in a GSS-supported workshop and to record his experiences. Another goal was to obtain his reaction to the FocusBuilder thinkLet.

Dr. Hugo Verheul, born in 1969, earned his doctorate at Delft University of Technology in 1999 with a dissertation on the role of social networks in the dissemination of cleaner technologies to small and medium-sized companies. As an engineer he worked with a particular interest in the concerns of corporations. He holds a Master of Science in Philosophy of Science, Technology and Society from the University of Twente. He currently works as director of academic affairs at Stenden University in Leeuwarden. Dr. Verheul is also an experienced facilitator and uses GSS in about 30% of the workshops he facilitates. Most of these workshops take place in a multi-actor environment in the policy development domain. The groups he works with are mostly newly formed teams, put together specially for the topic of the workshop.

Dr. Verheul recognizes the convergence step of a meeting as difficult and time consuming. Most of the time this step has to be executed under time pressure, at the beginning of the workshop, while the group is still getting familiar with each other. The results of the convergence activity form the basis for the remainder of the workshop. Dr. Verheul also sees the convergence activity as an opportunity for him, as facilitator, to build trust between himself and the participants. Partly because of this trust-development aspect, his method for convergence is to go through the entire list of brainstorm output in a plenary way. His technical assistant then writes all concepts that the group deems worth further attention into a new list. In this process he tries to invoke feedback from the group. In this way he also uses the convergence activity to develop the relations within the group and to show the group his expertise and trustworthiness. Possible participant distrust of the GSS is also tackled by this approach.

Regarding the FocusBuilder thinkLet, Dr. Verheul saw the following weaknesses / hurdles. Especially in newly formed groups with different interests and agendas, the gain in speed from working in parallel could vanish because of the extra instruction needed to start up the participants. Participants could also come up with questions during the first round of (individual) convergence of a subset of the brainstormed concepts. Another risk is that participants could feel that they are doing work that (in their opinion) should be done by the facilitator. Dr. Verheul warned that an extra round of divergence could occur when the two final lists are combined in the plenary part of the process. He concluded that the success of this approach mainly depends on the group in which it is executed. Groups that work together more often and know each other could benefit from this thinkLet; an example is a (project) team within one organization. Newly formed groups without a clear common goal (and possibly with different agendas) will probably not benefit from it.

To reduce the convergence task's workload for the facilitator, Dr. Verheul proposed allowing participants to add comments and remarks to the brainstormed items of other participants. In his opinion it would also be beneficial if participants could review the list of brainstormed items and change / correct their own input before the convergence activity starts.

According to Dr. Verheul, the end result of effective convergence is a set of concepts that the group can continue working with and that has the right level of abstraction with respect to the group goal. A by-product of effective convergence is focus and trust within the group.

Appendix D Interview with Dipl. Ing. M. Wilde

The interview with Mr. Wilde was conducted via email on 1 October 2009. Below is a copy of the questions and answers.

What is your feeling about convergence processes (difficult, fun to do, etc)?

As a facilitator I feel fine with this step, as people are usually very engaged to converge their brainstormed arguments. They like to do this very beneficial and sense creating step of work, so do I. Sometimes it can be difficult, if the granularity, goal or clarity of the task is not shared within the group. This then has to be solved first. Once a convergence started and people join in it is really fun.

What methods do you use to converge on a list of concepts?, is this method developed by you?

Most of the time we use thinkLets, or to be honest slight variations of thinkLets, because we often reduce the real spectrum of the thinkLets to what we really need for the task or in that context. In some cases we just use the functionality of the tools (statistical clustering plus finalisation discussion, or ranking and evaluation/discussion of the statistics).

Do you use any tools besides the GSS?

Not for the convergence. We would have the possibility to use other non-electronic facilitation methods, but we don’t use other tools for convergence.

What is effective convergence? How would you characterize the output of an effective / successful convergence activity?

A successful convergence would collaboratively turn a whole set of gathered ideas into a collection of meaningful subsets, which the group can work with towards the goal of the session.

How do you know that convergence was effective?

In the context of the session I would say the convergence was successful if we were able to commence work towards the goal without leaving somebody or something behind. We achieved our goal of convergence and made another step towards the goal of our task.

How long is a convergence process supposed to take? How do you define that (# of participants, amount of time spend on brainstorming)?

A workshop with 10 people, producing about 150 ideas in 20min, should be followed by a convergence of at least 30min. The main issue for me would be the # of participants, because the convergence has to be discussed afterwards.

Do you have any other comments / suggestions on convergence processes?

Maybe just one thing. What is the influence of transparency of convergence to the commitment of participants? Would it make sense to put more focus on the transparency, completeness and traceability of the activity, to gain more buy in from participants?

Reaction on FocusBuilder thinkLet:

Can you identify any strengths of this process?

A strength of this process for me is the quality filter you will get in small groups of two or four people. They will produce quite sound summaries.

Can you identify any weaknesses of this process?

A weakness could be that the original sense / spirit / meaning / originality could be lost in one of the steps, if the participant who produces the summary does not understand the meaning properly (Chinese whispers problem). You will receive high quality ideas at the end, but people might not find their own proposals!

In your opinion, will the process benefit from parallel working? If so, in what way?

In the sense of time saving.

Do you think this process will be easily accepted by participants? Why?

People might complain that they could not influence the merging and summarising of their own ideas. If their ideas are still under their control it might be easier.

Do you think it will be difficult or easy to facilitate this process? For what reasons?

It could be easy for very open topics which have the goal to raise creative or new ideas; that means people do not complain if their original ideas have been changed or disregarded. If people do have this feeling it might be a nightmare to facilitate. (You might have to stop the process because people do not feel comfortable with it.)

Do you have any other comments / suggestions on this process?

If people are not allowed to disregard any of the items, meaning they have to try to keep all proposals, it might avoid the aforementioned worries.

Appendix E Interview with Dr. Gert-Jan de Vreede

Dr. Gert-Jan de Vreede is the Kayser Distinguished Professor at the Department of Information Systems & Quantitative Analysis at the University of Nebraska at Omaha, where he is the Managing Director of the Center for Collaboration Science. He is also affiliated with the Faculty of Technology, Policy and Management of Delft University of Technology in the Netherlands, from where he received his PhD. He has been a visiting professor at the University of Arizona and the University of Pretoria (University of Nebraska at Omaha, 2009).

His research focuses on field applications of e-collaboration technologies, the theoretical foundations of (e-)collaboration, Collaboration Engineering, and the facilitation of group meetings. He was named the most productive Group Support Systems researcher world-wide from 2000-2005 in a comprehensive research profiling study by Bragge et al. (2007). His research has been published in various journals, including Journal of Management Information Systems, Journal of the AIS, Small Group Research, Communications of the ACM, DataBase, Group Decision and Negotiation, International Journal of e-Collaboration, Journal of Decision Systems, Journal of Creativity and Innovation Management, Simulation & Gaming, Simulation, and Journal of Simulation Practice and Theory (University of Nebraska at Omaha, 2009).

The two paragraphs above have been copied from the website of the University of Nebraska at Omaha (University of Nebraska at Omaha, 2009).

An opportunity was offered to have a short conversation with Dr. Gert-Jan de Vreede. The conclusions and highlights of the one-hour meeting are listed in this appendix.

Dr. de Vreede agreed that using parallelism (as offered by GSSs) is a good way to speed up the convergence process.

Dr. de Vreede also showed a possible design of a convergence process that he had thought of. In this design the participants are first asked to rate all concepts on a simple scale, for instance a 5-point scale ranging from non-critical (not important) to critical (important). The system then selects the concepts that are most critical to the participants, using the rating and its standard deviation. The system also creates subsets from this selection, which are presented to subgroups of 3-4 participants, who are asked to clean up and summarize their subset.
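The conversation did not detail the exact selection rule. A minimal sketch of this design, assuming the mean rating measures criticality and the standard deviation measures disagreement, could look as follows; the thresholds, subset size and all names are hypothetical:

```python
import statistics

def select_critical(concepts, ratings, min_mean=3.5, max_stdev=1.0):
    """Select concepts with a high mean rating and low disagreement.

    concepts: list of concept strings
    ratings:  per concept, the list of 1-5 ratings given by participants
    """
    selected = []
    for concept, scores in zip(concepts, ratings):
        if statistics.mean(scores) >= min_mean and statistics.pstdev(scores) <= max_stdev:
            selected.append(concept)
    return selected

def make_subsets(selected, size=5):
    """Split the selection into subsets for subgroups of 3-4 participants."""
    return [selected[i:i + size] for i in range(0, len(selected), size)]

concepts = ["concept A", "concept B", "concept C"]
ratings = [[5, 4, 5, 4], [3, 1, 5, 2], [4, 4, 3, 4]]
print(select_critical(concepts, ratings))  # A and C score high with low spread
```

Using the standard deviation this way filters out concepts the group disagrees about, which matches the intent of selecting concepts that are critical "to the participants" as a whole.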

Two axes can be defined for the way in which a convergence activity can be organized: one ranges from 'in parallel' to 'plenary', the other from 'making and combining selections' to 'emergent discussion'.

Appendix F Interview with Danny van den Boom and Michel van Eekhout

The 3-hour interview was held at the GroupSupport office in Eindhoven.

“GroupSupport is a consulting company and technology provider to help organizations think better, create better, and decide better, by engaging and empowering all organizational resources through efficient group processes. We use leading collaborative technology such as GroupSystems together with brainstorm and consensus building methods enabling groups to deliver more in less time. GroupSupport is your partner in collaboration services varying from single meetings to full scale projects for all profit and non-profit organizations, local and global. GroupSupport is the knowledge centre for GroupSystems technology in the Benelux and Germany.” Copied from the GroupSupport (2009) website.

Main conclusions:

• What efficient convergence is depends on the client's demand / task and the available resources.

• When the brainstorm artefact is very large, quick selection using a rating could be beneficial.

• Giving the opportunity for reformulation between brainstorming and convergence could be beneficial.

• The dissemination of GSS depends on price, availability, complexity (of facilitation and system interface) and the needs for knowledge exchange.

• During convergence (and brainstorming) the level of abstraction can be guided using examples.

• Looking at the formulation of concepts (e.g. ‘I’ in front of the sentence) can help in determining the time needed for and complexity of the convergence activity.

Convergence is difficult and time consuming. 'Perfect' convergence (no redundancies, all concepts understood) generally takes 30 to 60 minutes, independent of group size.

Removing redundancy: first categorize the concepts, so that all concepts within a category are related to each other. This facilitates removing redundancies and lowers cognitive load. Redundancies are removed by making pairwise comparisons in a plenary way, so categorizing saves time and lowers cognitive load. For instance, removing redundancies from a brainstorm artefact of 75 concepts is very hard, while with a subset of 25 concepts it is much easier.
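The effect described here can be quantified: checking n concepts for redundancy by exhaustive pairwise comparison requires n(n-1)/2 comparisons, so splitting a large artefact into categories sharply reduces the work. A small sketch:

```python
def pairwise_comparisons(n: int) -> int:
    """Number of distinct pairs to compare when checking n concepts for redundancy."""
    return n * (n - 1) // 2

# A raw artefact of 75 concepts vs. three categories of 25 concepts each
print(pairwise_comparisons(75))      # 2775 comparisons without categorizing
print(3 * pairwise_comparisons(25))  # 900 comparisons after splitting into 3 categories
```

Roughly a threefold reduction, on top of the cognitive relief of only comparing concepts that already belong together.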

GS uses parallel subgroups to remove redundancies from subsets of the brainstorm artefact. The subgroups present their findings to each other, or the findings are printed and handed over. In the case of presentations, GS lets the other participants comment on them in a TT brainstorming module.

GS creates subsets of the brainstorm artefact based on different topics.

Reaction to FocusBuilder:

Similar to FastFocus. Preferably, removing redundancies and grouping similar concepts is done automatically, with grouping based on meaning. After working in subgroups, fostering the creation of shared understanding among all participants is of great importance.

In most commercial workshops, time is a limiting factor and influences (or even determines) the result of the workshop. Often there is no time to let participants work in subgroups to reduce a subset of brainstormed ideas. Therefore often only a selection (by means of voting) is made on the raw brainstorm artefact, which is then checked by the facilitator for 'lost' redundant items. In a workshop, convergence is often the first candidate to gain time for other aspects of the workshop. When GS needs to gain time, they converge by reducing the brainstorm artefact to one third of its original size by means of voting. It is advisable to include an extra safety step when working this way: always ask the participants whether all instrumental concepts are present in the selected set. This may yield additional good concepts and always improves the trust that no good ideas are 'thrown away'.

Voting results in a GSS appear to be 'hard' numbers, but in fact are not.

Effective convergence, by the GS definition, means the goal is achieved when the customer is satisfied. In some cases the client even determines how convergence is to be achieved, sometimes disregarding the advice of the collaboration engineers / experts. Reasons for this are political pressure, the power of habit, or the lay-out / structure of the organization.

The number of concepts and the group size are leading in determining the most suitable way of convergence.

All plenary activities (discussions / presentations / going through a list of concepts) foster the creation of shared understanding and give the facilitator the possibility to remove redundancies.

A way to reduce the need for convergence is limiting the input during the brainstorming phase, for instance by asking participants for 1 or 2 ideas instead of an unlimited number.

In some workshops the first two minutes of brainstorming can be thrown away.

The process should always match with the participants, in all possible ways.

Performance indicators (how to measure success): sometimes speed, sometimes reduction level, sometimes the level of shared understanding.

The level of comprehensiveness is important. It is also important not to 'lose' participants (iedereen aan boord houden: keeping everyone on board).

Independent of the way of convergence, keep the message and the messenger uncoupled, safeguarding anonymity.

Convergence should happen on content level. Try to minimize the influence of hierarchy etc.

To steer the level of abstraction (level of refinement): give examples in advance of the brainstorm.

Depends fully on the goal of the workshop.

Appendix G Interview with prof.dr.ing. J.B.F. Mulder

By email, 1 October 2009

Prof.dr.ing. J.B.F. (Hans) Mulder, MBA is the managing director of Venture Informatisering Adviesgroep N.V. (VIAgroep), professor at the University of Antwerp, Faculty of Applied Economics, Management Information Systems Department, and a lecturer at the Utrecht University of Professional Education (Master of Informatics) and the Police Academy. He received his PhD at Delft University of Technology (Information Systems department), his Master of Science in Business Administration (MSc.BA) at Nijenrode Business Universiteit in 1994 and his Bachelor of Science in ICT at The Hague University of Professional Education (Informatics Sector) in 1993 (ViaGroep N.V., 2009).

In 1996, Hans founded Essential Action Engineers B.V., a company which designs organizations by means of Rapid Enterprise Design. From 1998 onward, he has represented the Group Decision Support System, Meetingworks, in Europe (ViaGroep N.V., 2009).

In addition, as an IT expert, he is frequently involved in matters of dispute settlement in the field of IT, participating in arbitration, mediation, binding opinions, and expert reports. Where necessary, he also functions as an expert providing information to district courts. He is a certified NMI mediator, and a qualified Expert of the Court (Gerechtelijk Deskundige). Since 1996, he has been involved in over 80 cases of arbitration, mediation, binding opinion, and expert reports. Furthermore, he has published more than 40 articles in specialist journals and international magazines, and is the author of the books Rapid Enterprise Design and Eenvoud in Complexiteit (ViaGroep N.V., 2009).

The resume of prof.dr.ing. J.B.F. Mulder was copied from the website of the organization he works for (ViaGroep N.V., 2009).

General convergence questions:

What is your feeling about convergence processes (difficult, fun to do, etc)?

Difficult but fun

What methods do you use to converge on a list of concepts?, is this method developed by you?

We used the convergence methodology of the University of Washington (prof. Floyd Lewis), supported by Meetingworks Inc.

Do you use any tools besides the GSS?

Yes, business modeling software and Office 2007.

What is effective convergence? How would you characterize the output of an effective / successful convergence activity?

Consensus on process and outcome, validated by predefined criteria

How do you know that convergence was effective?

Measurement on dimensions of process and outcome

How long is a convergence process supposed to take? How do you define that (# of participants, amount of time spend on brainstorming)?

The convergence process consists of an agenda-planning phase.

Reaction on FocusBuilder thinkLet:

Can you identify any strengths of this process?

Divide and conquer; a strength when topics are loosely connected.

Can you identify any weaknesses of this process?

No integral problem solving.

In your opinion, will the process benefit from parallel working? If so, in what way?

Depends on type of problem

Do you think this process will be easily accepted by participants?

depends

Do you think it will be difficult or easy to facilitate this process?

Easy

Appendix H Interview with Jan Lelie

By email, 1 October 2009

What is your feeling about convergence processes (difficult, fun to do, etc)?

The convergence processes are the key to successful group decision making. I find it both difficult and fun, challenging and interesting. The convergence processes are also important to keep rhythm and structure in a meeting. A meeting starts with a convergence: people coming in, so I start there with greeting them, having them introduced (being 'heard') and establishing the shared expectations of the meeting. (This step is not negotiable, because I assume that when people have unclear expectations, the chance of a dissatisfied meeting is higher. Having no expectations is clear and also OK.)

The third convergence is on the issue under investigation and prioritizing the aspects. The fourth is on (alternative / possible) solutions, the fifth on an Action Plan. The meeting usually ends with a date for further meetings about the process of implementing the actions.

What methods do you use to converge on a list of concepts?, is this method developed by you?

I have several, depending on the context of the issue, the questions, the time available and the goals of the meeting. They range from 'free' clustering, through mind mapping, SWOT and time lines, to action planning. I almost always use cards or labels with the ideas or items written or printed on them. My GSS has been designed to print the results on cards and then cluster them. Sometimes, 1 in 30 perhaps, I converge by voting on ideas.

Do you use any tools besides the GSS?

What do you mean by ‘tool’? I use picture cards, drawings, matrices, sometimes I use chairs, objects in the meeting space, clay, Lego ™, …

What is effective convergence? How would you characterize the output of an effective / successful convergence activity?

My definition of an effective meeting is one where the agreed actions have been executed. I also have clear deliverables from a meeting (like the 5 potentially most successful actions, or two ideas for further research, or ...). The output of a successful convergence is a short list with sequenced or prioritized ideas or items.

How do you know that convergence was effective?

I always reach the deliverables, but sometimes participants or problem owners are not happy with the results. The convergence was effective when people are motivated to take action AND take action. The latter, however, is not my responsibility.

How long is a convergence process supposed to take? How do you define that (# of participants, amount of time spent on brainstorming)?

20 minutes is about optimal; longer doesn’t lead to better results, shorter can be too short.

Supporting convergence in groups

Interview with Jan Lelie


Can you identify any strengths of this process?

Systematic, logical

Can you identify any weaknesses of this process?

Based on the Helsinki principle (“Any meaningful exchange of utterances depends upon the prior existence of an agreed upon set of semantic and syntactic rules. The recipients of the utterances must use only these rules to interpret the received utterances, if it is to mean the same as that which was meant by the utterer.”), which is flawed because:

a. the rules of interpreting messages are never the same
b. the rules are never checked
c. it disregards the pragmatic rules of conversations.

- Ideas are never redundant: they represent different aspects of what is being communicated. What is redundant for one person might be an eye-opener for another.
- There is no ‘social’ interaction between participants, which is needed to express and interpret ideas and utterances. People want – and need – to be ‘in touch’.
- Subtext and context are as important as the text itself.
- It is no fun, perhaps even stressful for some people. Requires concentration over a relatively long time.
- Repetitive.

In your opinion, will the process benefit from parallel working? If so, in what way?

Will the people benefit from parallel working? I think it would be better if people could collaborate in small groups of about 5. Talking and sharing.

Do you think this process will be easily accepted by participants?

People do anything if you ask them politely. There is also the issue of hierarchy: they’ll do it because the boss wants it.

Do you think it will be difficult or easy to facilitate this process?

I cannot do it. Not because it is difficult or easy; it is just not my style. Also, I design sessions based on congruence: a session should be congruent with the intended outcome (or else it creates ‘noise’, resistance or ambiguity). If the outcome of the session must be that people are able to make lists on the computer, it would be fine with me. It might be used when dealing with lists of simple tasks which need arranging. I couldn’t use it on complex issues.

Do you have an alternative?

Hmmm, it would also be interesting to let people make an arrangement of the ideas: add an arrow to every idea and ask them to distribute the ideas on a square, round or hexagonal field (projected on a big (touch) screen, or arranging using a mouse). Talk about it: which ideas point in different directions? Which in the same? Where is a bridge or where is a barricade? Make snapshots of results during conversations. Add notes.


Appendix I Detailed overview of thinkLets for convergence The table below gives a detailed overview of all thinkLets found that enable a reduce and/or clarify pattern of collaboration. The table lists the thinkLet name, the pattern(s) of collaboration that the thinkLet achieves, two characteristics of the output, general remarks and the elements of the thinkLet. The characteristics of the thinkLet output that are listed are whether the thinkLet reduces to a list or not and whether the thinkLet removes redundancy & ambiguity from the input artefact. When a thinkLet consists of more than one element or activity, these are mentioned in chronological order in the rightmost column of the table.

# | ThinkLet | Pattern (Selecting / Abstracting / Summarizing / Clarifying) | Description | Output (Reduces to; Removes redundancy & ambiguity?) | Remarks | Elements of the thinkLet (in chronological order)

1 BroomWagon X

Brainstorming ideas are selected in order to identify the ones that are worthy of further attention (Davis, et al., 2007). This is done by allowing the participants to place a limited number of checkmarks (den Hengst, et al., 2004).

List N

The presentation part is not explicitly mentioned in the description

Selecting: checkmarks (in parallel) Presentation of the results (plenary)

2 GarlicSqueezer X

The facilitator works with assistant to condense the list of brainstorming ideas by selecting contributions that represent the highlights. Each person starts at a different end of the list and works to the middle so that all but the key ideas are squeezed out (Davis, et al., 2007).

List Y

The presentation part is not explicitly mentioned in the description

Selecting by third party (asynchronous) Presenting the results to the participants (plenary)

3 GoldMiner X

Participants view a page containing a collection of ideas, perhaps from an earlier brainstorming activity. They work in parallel, moving the ideas they deem most worthy of more attention from the original page to another page (Briggs & de Vreede, 2009).

List N

The presentation part is not explicitly mentioned in the description

Selecting: moving concepts (in parallel) Presentation of the results (plenary)

4 Reporter X Allowing participants to explain concepts and soliciting input from other participants (Fruhling, et al., 2007).

List N

Only found in the publication of Fruhling, Steinhauser, Hoff and Dunbar (2007). Very similar to the first part of ReviewReflect. Does not lead to reduction.

Reviewing & commenting on brainstorm artefact (plenary)

5 ReviewReflect X X

The group reviews and comments on the existing content first, in a parallel way. Next, the group discusses the restructuring and rewording of the content in a moderated discussion (Davis, et al., 2007; Noor, et al., 2008).

List Y Similar to Reporter with discussion element

Reviewing & commenting on brainstorm artefact (in parallel) Guided discussion on restructuring of brainstorm artefact (plenary)

6 ExpertChoice X X An expert is selected to condense and summarize a set of ideas and presents the finalized set to the entire team (Davis, et al., 2007).

List Y

Selecting: by third party (asynchronous) Presenting the results to the participants (plenary)

7 BucketSummary X X Purpose according to Nabukenya, van Bommel and Proper (2008): ‘To remove redundancy and ambiguity from broad generated items’.

List Y

More information on the thinkLet not found, therefore unable to extract elements.

8 DimSum X X

Individual members generate candidate statements. Group members identify words and phrases that they like from those statements. Group and facilitator work together to draft a statement from selected words and phrases. If wordsmithing breaks out, process is repeated with current draft as a starting point (Briggs, et al., 2008; Davis, et al., 2007).

One concept

Y

Selecting: one concept each (in parallel) Guided discussion on the selected concepts (plenary) Reformulation into one concept (plenary)

9 Pin the tail on the Donkey

X X

Group members browse a collection of ideas, often from a Brainstorming session. Group members place a mark by the ideas on which they want to focus everyone’s attention. Marked ideas are discussed in a plenary activity (Briggs, et al., 2003; Davis, et al., 2007).

List N = BroomWagon + discussion element

Selecting: checkmarks (in parallel) Guided discussion on the marked concepts (plenary)

10 BucketBriefing X X X Categories with ideas are assigned to subgroups and the subgroups clean up the ideas before reporting back to the entire group (Davis, et al., 2007; Fruhling, et al., 2007).

list Y

Input artefact should contain categories or concepts should be divided into subgroups. Similar to FocusBuilder

Selecting: qualitatively (in parallel) Presenting to other subgroups (plenary)

11 FastHarvest X X X

Participants form subgroups that are responsible for a particular aspect or category that relates to the brainstorm ideas. Taking a subset of all brainstorm ideas at a time, each subgroup extracts concise and clear versions of ideas that relate to their aspect or category. Every time the subgroup is done with a subset of ideas, they process another subset until they have considered all brainstorming ideas. When all subgroups are done, each subgroup presents their findings to the whole group and clarifies the meaning (not merit) of their extractions if necessary (Davis, et al., 2007).

List Y The subgroups have to go through all concepts

Selecting: qualitatively (in subgroups) each subgroup focuses on a particular aspect or category. Presenting to other subgroups (plenary)

12 FastFocus X X X

Each participant browses a subset of brainstorming ideas. Participants take turns proposing an idea from the collection to be added to a public list of ideas deemed worthy of further consideration. The group discusses the meaning, but not the merits, of the proposed idea. The facilitator adds a concise, clear version of the idea to the public list (Davis, et al., 2007).

List

Y Very popular

Selecting: qualitatively (plenary, based on perceived instrumentality) Guided discussion on the selected concepts (plenary) Reformulation of the concepts (plenary)

13 FocusBuilder X X X All brainstorm ideas are divided into as many subsets as there are participants. Each participant receives a subset of brainstorm ideas and is tasked to extract the critical ideas. Extracted ideas have to be formulated in a clear and concise manner. Participants are then paired and asked to share and combine their extracted ideas into a new list of concise, non-redundant ideas. If necessary, the formulation of ideas is improved, i.e. the pairs focus on meaning, not merit. Next, pairs of participants work together to combine their two lists into a new list of concise, non-redundant ideas. Again, the formulation of ideas is improved if necessary. The pairing of lists continues until there are two subgroups that present their results to each other. If necessary, formulations are further improved. Finally, the two lists are combined into a single list of non-redundant ideas (Davis, et al., 2007).

List Y

Selecting: qualitatively (in subgroups) Combining subgroups Combining subgroups Presenting the results (plenary)

14 OneUp X X X

Group browses a collection of brainstorming ideas. First participant adds an idea to the public list. For each subsequent addition, the proposer argues why the new idea is better than those already on the list. Facilitator writes a concise, clear version of the idea on the public list. Facilitator also keeps a list of the criteria used by the participants (Davis, et al., 2007; den Hengst, et al., 2004)

List

Y

List of used selection criteria as by-product. Each selected concept has to score better on one of the criteria than the previous ones in order to be selected Similar to FastFocus with condition.

Selecting: qualitatively & conditionally (plenary, based on a set of criteria) Guided discussion on the selected concepts and criteria (plenary) Reformulation into one concept (plenary)


Appendix J Detailed overview of IAF methods for convergence The table below gives a detailed overview of all IAF methods found that enable a reduce and/or clarify pattern of collaboration. The table lists the IAF method name, the pattern(s) of collaboration that the method achieves, two characteristics of the output, general remarks and the elements of the method. The characteristics of the method’s output that are listed are whether the method reduces to a list or not and whether the method removes redundancy & ambiguity from the input artefact. When a method consists of more than one element or activity, these are mentioned in chronological order in the rightmost column of the table.

# | IAF method name | Pattern (Selecting / Abstracting / Summarizing / Clarifying) | Description | Output (Reduces to; Removes redundancy & ambiguity?) | Remarks | Elements of the method in chronological order

1 2x2 Value Matrix X Evaluating and selecting a concept from a list, based on the score on 2 criteria.2 List

N

Selecting: quantitatively (in parallel)

2 3x3 Value Matrix X Evaluating and selecting a concept from a list, based on the score on 3 criteria.3 List

N

Selecting: quantitatively (in parallel)

3 Affinity Diagram X X Collaborative clustering of concepts and finding general heading for the clusters.4

List (clustered)

N Organizing pattern of collaboration

Organizing: plenary discussion

4 Ballooning method

X X All concepts are written on balloons. By bursting balloons a selection is made.5 List

N

Selecting: deleting (plenary)

5 Build up X X X

1. After generating a list of alternatives, someone is asked to name one that might work. 2. The facilitator asks: ‘Is there anyone who could NOT live with this one?’ 3. If there is anyone who cannot, ask for changes that could help everyone live with it.6

List

Y Selecting: qualitatively (plenary)

6 Clustering in columns method X X All ideas are clustered into columns, after that the columns are given titles. All columns have a symbol. When clustering is finished, the symbols are replaced by a name.7

List (clustered) N

Organizing pattern of collaboration

Organizing: plenary discussion

2 http://www.imaginal.nl/toolweek3.htm
3 http://www.imaginal.nl/Model3x3Matrix.htm
4 http://www.affinitymc.com/affinity-diagram.pdf
5 http://www.jpb.com
6 http://ica-associates.ca/downloads/MeetingTools.pdf

7 Symbol gestalt method

X X

Similar to the ‘clustering in columns’ method. Only now the symbols are added to the concepts instead of vice versa. 8

List (clustered)

N Organizing pattern of collaboration

Organizing: plenary discussion

8 Delphi method X X Anonymous experts summarize and remove redundancy from a list of brainstormed concepts.9

List

Y

Selecting: experts (third party) Cleaning up (third party) Presenting results (plenary)

9 Discerning priorities with colours

X

All participants can state their preference, based on multiple criteria, on a list of concepts. This is done by sticking coloured dots to concepts. Every dot has a different colour and represents a different criterion. The participants are given a limited number of dots per colour.10

List

N Presentation part not mentioned explicitly

Selecting: quantitatively (in parallel) Presenting results (plenary)

10 Evaluation by values

X X

Collaboratively assign values (based on predefined criteria) to a set of concepts. Based on the evaluation a number of concepts can be selected.11

List

N Presentation part not mentioned explicitly

Selecting: quantitatively (plenary) Presenting results (plenary)

11 Gallery tour or walk X X X

Concepts are brainstormed in sub groups; each group presents their concepts on a separate screen or whiteboard. The groups visit each other and the concepts are presented. As many rounds as there are sub groups are needed.12

List Y Discussion not mentioned explicitly

Selecting: qualitatively, in sub groups (in parallel) Presenting / discussing results (in parallel) Discussion on results (plenary)

7 http://www.iaf-methods.org/node/5225 http://www.imaginal.nl
8 http://www.iaf-methods.org/node/5227 http://www.imaginal.nl
9 http://en.wikipedia.org/wiki/Delphi_method
10 http://ica-associates.ca http://www.iaf-methods.org/node/5102
11 http://www.imaginal.nl/ModelValueMatrix.htm

12 Multi voting X Same as ‘Discerning priorities with colours’.13 List

N

Presentation part not mentioned explicitly

Selecting: quantitatively (in parallel) Presenting results (plenary)

13 Nominal Group Technique

X Divergence method where clarification of concepts is stimulated by the group in the form of questions.14

List

N Fosters the creation of shared understanding

Questioning the brainstorm artefact (plenary)

14 Paired Comparisons

X Making pair wise comparisons between a limited set of concepts to select the most suitable one.15

List

N Selecting: pair wise comparisons (plenary)

15 Pin Cards X X

A non-anonymous way of brainstorming concepts, followed by sorting the concepts into columns to group them and remove redundancy.16

List

Y Also enables the organizing pattern of collaboration

Sorting concepts (plenary) Removing redundancy (plenary)

16 Plus, Minus, Interesting

X X

Describing and eventually selecting a concept from a large list by collaboratively describing the plus, minus and interesting points per concept.17

List

Y Selecting: qualitatively (plenary)

17 Polar Gestalt method X X X

Remove redundancy by mapping a set of concepts on a wall etc. The concepts are brainstormed on post-its and the participants place them on a wall to show relations and similarities. In the end all concepts that are grouped together are given one name.18

List Y Also enables the organizing pattern of collaboration

Selecting: moving (parallel) Discussion to remove redundancy (plenary)

12 http://ica-associates.ca/downloads/MeetingTools.pdf http://www.iaf-methods.org/node/5309
13 http://www.iaf-methods.org/node/5088
14 http://www.iaf-methods.org/node/5430
15 http://www.iaf-methods.org/node/5089
16 http://www.mycoted.com/Pin_Cards
17 http://www.iaf-methods.org/node/5091

18 The Hundred Dollar Test X To quickly select between a large set of concepts, let all participants allocate 100 dollars between the concepts.19

List N

Selecting: quantitatively

19 TRIZ X X X Large set of principles to guide a group. License needed.20 unknown

unknown

Unable to find further information

Unable to assess

18 http://www.iaf-methods.org/node/5226

19 http://creatingminds.org/tools/hundred_dollar.htm 20 http://www.aitriz.org


Appendix K Scrum backlog for Divide&Conquer and FocusBuilder thinkLets This section describes the functionalities needed to execute the Divide&Conquer and FocusBuilder thinkLets, from the perspective of the facilitator and from the perspective of the participant. The backlog is specified for the development of the functionality within the existing TeamSupport platform. Therefore some general architecture issues were not included.

Backlog Divide&Conquer thinkLet

As a facilitator, I want to:

1. Allow the participants of my workshop to quickly express their perceived instrumentality of a brainstormed concept, using a 5-point scale.
2. Be able to change the labels of the 5-point scale.
3. Speed up the voting process by (initially) not asking all participants to express their opinion on all concepts. Instead, only four votes per concept are collected, but in such a way that no participant is asked to vote on a concept that he contributed himself.
4. Achieve that the default voting score for all participants is 3.
5. Automatically classify the concepts into three categories after voting, based on the values for the average voting score and standard deviation.
6. Be assisted by the GSS in determining whether there are enough participants and concepts to start this way of voting.
7. Decide after the first round of voting whether a second round of voting is needed.
8. Have an overview of how many concepts there are in the three categories after the first and second round of voting.
9. Be able to present the categorized concepts to the participants, after the first round of voting, as a basis for discussion, but only when I decided not to have a second round of voting.
10. Regroup (change the classification of) concepts while presenting them to the participants.
11. Edit (change the text of) concepts while presenting them to the participants.
12. Be able to issue a second round of voting, directly after the first one, but in such a way that the participants only vote on category 3 concepts that they did not vote on yet and that are not their own concepts.
13. See, after each voting round, an overview of the concepts, average score, standard deviation, classification and the classification rules.
14. Be able to change the values for the average scores and standard deviations, based on which the GSS classifies the concepts.
15. Make sure that the participants vote on all concepts on their screen; skipping one is not allowed.
16. Be able to present a categorized overview of the concepts after the second round of voting.
17. Be able to store and later review a list of all original concepts.
18. Be able to store and later review an overview of the voting settings.
19. Be able to store and later review an overview of how the concepts were distributed among the participants for voting, in both the first and second round.
20. Be able to store and later review an overview containing all individual voting results.
21. Be able to store and later review the time a voting round started and ended.
22. Be able to store and later see the times at which the individual participants submitted their votes.
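Items 5, 13 and 14 above describe an automatic three-way classification of concepts based on the average voting score and its standard deviation. The actual threshold values are not given in this backlog, so the cut-offs in the sketch below are illustrative assumptions only, and the function name is our own:

```python
from statistics import mean, pstdev

def classify_concept(votes, accept_avg=3.5, decline_avg=2.5, max_std=1.0):
    """Classify a concept from its votes (1-5 scale) into one of three
    categories, mirroring backlog items 5 and 14. The threshold values
    are illustrative defaults, not the thesis' actual rules."""
    avg = mean(votes)
    std = pstdev(votes)
    if std > max_std:
        return "discuss/revote"   # high spread: no consensus among voters
    if avg >= accept_avg:
        return "accept"
    if avg <= decline_avg:
        return "decline"
    return "discuss/revote"       # middling average: needs discussion
```

Per item 14, a facilitator could tune `accept_avg`, `decline_avg` and `max_std` between rounds; the GSS would apply the function to every concept after each voting round.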


As a participant, I want to:

23. Quickly express my vote on the 5-point scale.
24. See an image of the voting scale, including the labels, for reference during voting.
25. Know how many concepts I have to vote on and why.
26. Know why I do not see my own contributed concepts in the voting list.
27. See the difference between the first and second round of voting.
28. Be informed that the rest of the participants are voting in the second round, but that I do not have to participate in this round, because my vote is already captured. Only when needed (only when only my concepts were category 3 concepts).
29. See the overview of the categorized concepts, including a legend / explanation of the three categories.
30. Get a warning when I click vote that confirms casting the vote with me (‘are you sure...’).
31. See, after I have cast my vote, how many participants still have to cast their vote.
32. See the concepts and their classification after voting, during the discussion.

Backlog FocusBuilder thinkLet

As a facilitator, I want to:

33. Allow the participants to work in parallel (in subgroups) on different parts of the brainstorm list.
34. Let the GSS divide the list of concepts randomly into four, more or less equal, parts.
35. Send each of the four parts of the list to a different workstation.
36. Select the workstations where the lists in round 1 are being sent to.
37. Select the workstations where the lists in round 2 are being sent to.
38. Receive back and see the converged versions of the lists that I have sent to the participants.
39. Combine the four lists that I got back into two lists.
40. Send these two lists back to two different workstations, to allow the participants to combine them.
41. Track what the four subgroups are doing in the first round, by seeing their screens in real time, including the original list that they started with.
42. Measure and see the 4 groups’ progress during the first round.
43. Measure and see the 4 groups’ level of convergence during the first round.
44. Measure and see the 2 groups’ progress during the second round.
45. Measure and see the 2 groups’ level of convergence during the second round.
46. Track what the two subgroups are doing in the second round, by being able to see their screens in real time, including the two original lists that they started with.
47. Receive back and see the two converged lists from the two subgroups of round two.
48. Be able to present the two lists from round 2 to all participants.
49. Be able to facilitate a discussion on how to combine these two lists in the final round.
50. Have facilities to combine concepts from the two lists of round two into one final list.
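Item 34 asks the GSS to randomly divide the brainstorm list into four, more or less equal, parts. A minimal sketch of that split (the function name and the optional seed parameter are our own, for illustration):

```python
import random

def split_into_parts(concepts, n_parts=4, seed=None):
    """Randomly divide a list of brainstorm concepts into n roughly
    equal parts (backlog item 34). Part sizes differ by at most one."""
    rng = random.Random(seed)
    shuffled = concepts[:]          # copy so the original list is untouched
    rng.shuffle(shuffled)
    # slicing with a stride distributes any remainder over the first parts
    return [shuffled[i::n_parts] for i in range(n_parts)]
```

Each of the four resulting parts would then be sent to a different workstation (items 35-37); the same function with `n_parts=2` could serve the round-2 pairing.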

As a participant, I want to:

51. Work in a subgroup on a subset of the brainstorm list. I want to do this twice, in round 1 and in round 2.
52. See the list that my group has to work on, in both round 1 and round 2.
53. Be able to move concepts from the list that the GSS gave us to the list that my group is creating, in both round 1 and round 2.
54. Be able to edit and format the concepts in the list that my group is creating, in both round 1 and round 2.
55. Send back our converged list to the facilitator when my group has finished, in both round 1 and round 2.
56. See the combination of two converged lists, in order to combine them in round 2.
57. Be able to follow the discussion of the combination of the two final lists in the final round. This applies to all participants.


Appendix L Example and explanation of Normalized Word Vectors

The example and the explanation below are entirely copied from Parker et al. (2008).

For example, suppose we have a simple thesaurus with the following words assigned to one of 5 concept numbers:

Concept Number | Words
1 | the, a
2 | pretty, lovely, gorgeous
3 | flower, bloom, blossom
4 | red
5 | yellow

Next suppose we have the following successive sentence fragments from two separate documents:

Document# | Document Text
(1) | The pretty flower… A lovely bloom…
(2) | The red blossom… A yellow bloom…

If we view the concept numbers as representing the axes of a five dimensional space, then the vectors for these two documents can be written by counting the number of times that a word associated with each concept number appears in the document fragments. In document 1 there are two words associated with concept 1 – ‘the’ in fragment 1 and ‘a’ in fragment 2. There are two words associated with concept 2 – ‘pretty’ in fragment 1, and ‘lovely’ in fragment 2. There are two words associated with concept 3 – ‘flower’ in fragment 1 and ‘bloom’ in fragment 2. In document 1 there are zero words in the fragments associated with concepts 4 and 5. Document 2 is assessed in a similar manner. The table below summarizes this analysis for both documents.

Document# | Vector Representation | Explanation
(1) | [2, 2, 2, 0, 0] | [the, a; pretty, lovely; flower, bloom; null; null]
(2) | [2, 0, 2, 1, 1] | [the, a; null; blossom, bloom; red; yellow]

Because graphical representations beyond three dimensions are difficult to produce, the remainder of this discussion will consider only the first three dimensions for these documents. Thus, the first three dimensions give us:

Document# | Vector – first 3 concepts | Explanation
(1) | [2, 2, 2] | [the, a; pretty, lovely; flower, bloom]
(2) | [2, 0, 2] | [the, a; null; blossom, bloom]

Vectors 1 and 2 represent the documents and are instances of what are termed NWVs. Vector 1 represents a line from [0,0,0] through [2,2,2] and vector 2 a line from [0,0,0] through [2,0,2]. If we assume that document 1 is the exemplar document, then we can see how semantically close document 2 is to the exemplar document by looking at the closeness of their corresponding vectors. The angle between the exemplar document vector and the vector for document 2 is named theta. The angle between the vectors varies according to how "close" the vectors are. A small angle indicates that the documents contain similar content; a large angle indicates that they do not share much common content. It turns out in practice that the cosine of theta is generally a powerful predictor of document similarity. If the two documents above were identical in terms of the number of times each concept was mentioned, then the NWVs would be identical and they would appear as collinear vectors in the diagram with a cosine equal to 1. If the documents were completely different, the vectors would be orthogonal, and their cosine would be 0. For these and all other cases the cosine of the angle is calculated in the normal fashion.
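To make the comparison concrete, the example can be reproduced in a few lines of Python (a sketch; the function name is our own, not from Parker et al.):

```python
import math

def cosine_similarity(v1, v2):
    """Cosine of the angle between two non-zero concept-count vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    return dot / (norm1 * norm2)

# Vectors from the example above (first three concept dimensions).
doc1 = [2, 2, 2]  # [the, a; pretty, lovely; flower, bloom]
doc2 = [2, 0, 2]  # [the, a; null; blossom, bloom]

cos_theta = cosine_similarity(doc1, doc2)          # ≈ 0.816
theta = math.degrees(math.acos(cos_theta))         # ≈ 35.3 degrees
```

Identical vectors give a cosine of 1 (angle 0°) and orthogonal vectors a cosine of 0 (angle 90°), exactly as described in the text.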


Appendix M Using Normalized Word Vectors in a brainstorm artefact This appendix gives detailed information about the application of Normalized Word Vectors to a brainstorm artefact. This process was executed in several iterations.

Calculating the angles between vectors:

Calculating the angles between all vectors is done in this experiment using Matlab. The formula used to calculate the angle is:

cos θ = (v₁ · v₂) / (‖v₁‖ ‖v₂‖)

MATLAB was given the command below:

    % read from csv file
    %M = importfile('vectors_csv.csv');
    M = dlmread('itemMatrixgreenict.csv');
    %A = dlmread('ph.dat', ';');
    [H W] = size(M);

    % constants
    N = W; % number of vector elements (list of synonyms)
    L = H; % number of brainstorm concepts

    % initialize vectors and matrices
    S = zeros(L); % similarity matrix
    V = zeros(L,N); % vector matrix

    % build up matrix with combinations to be displayed
    for i = 1:length(S)
        for j = 1:length(S)
            if j > i
                S(i,j) = 1;
            else
                S(i,j) = 0;
            end
        end
    end

    % build vector matrix
    % M = [[1 2 2]
    %      [1 2 2]
    %      [1 2 2]
    %      [1 2 2]
    %      [1 2 2]];

    % calculate the right angles
    R = 0;
    k = 1;
    for i = 1:length(S)
        for j = 1:length(S)
            if S(i,j) == 1
                vector1 = read_vector(M,i);
                vector2 = read_vector(M,j);
                R(k) = nwv_angle(vector1, vector2);
                k = k+1;
            else
                continue
            end
        end
    end

    % display the output in matrix format
    k = 1;
    for i = 1:length(S)


        for j = 1:length(S)
            if S(i,j) == 1
                S(i,j) = R(k);
                k = k+1;
            else
                continue
            end
        end
    end

Matlab then returns all angles in degrees between the concepts that are specified as vectors.
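The MATLAB script relies on two helper functions (read_vector and nwv_angle) that are not listed here. As a sketch under that assumption, the same pairwise-angle computation can be expressed in Python, treating each row of the matrix as one concept's word vector:

```python
import math

def nwv_angle(v1, v2):
    """Angle in degrees between two non-zero concept vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norms = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    # clamp against floating-point round-off before taking acos
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

def pairwise_angles(M):
    """Upper-triangular matrix of angles between all rows of M,
    mirroring the matrix S built in the MATLAB script above."""
    n = len(M)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            S[i][j] = nwv_angle(M[i], M[j])
    return S
```

Two concepts that share no thesaurus entries are orthogonal and yield 90 degrees, which matches the many 90.00 cells in the angle matrix below.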

For the first brainstorm artefact used, the manually calculated matrix of similarities is shown below, for the other three brainstorm artefacts the matrices look the same, only more detail is given by using 0, 1 and 2. These matrices can be found on de CD-ROM on the last page.

The calculated angles look like the figure below;

[Two 20 x 20 upper-triangular concept-by-concept matrices, garbled in extraction. The first holds binary entries (1,00 for concept pairs flagged as similar, 0,00 otherwise). The second holds the angle between each pair of concept vectors, ranging from 61,87 to 90,00 degrees, where 90,00 indicates no similarity.]
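The pairwise angles in these matrices come from vector-based similarity detection. As an illustrative sketch (plain Python; this shows the general idea, not the thesis's exact NWV implementation, and the function name is mine), the angle between the word-count vectors of two concept texts can be computed as follows, with 0 degrees meaning identical wording and 90 degrees meaning no shared words:

```python
from collections import Counter
from math import acos, degrees, sqrt

def angle_between(text_a, text_b):
    """Angle (degrees) between the word-count vectors of two concept
    texts: 0 = identical wording, 90 = no words in common."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    # clamp against floating-point rounding before taking the arccosine
    return degrees(acos(min(1.0, dot / (norm_a * norm_b))))

print(round(angle_between("reduce energy use", "reduce energy waste"), 2))  # 48.19
```

Two concepts sharing two of three words end up at roughly 48 degrees, well below the 90 degrees of completely unrelated concepts.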

Supporting convergence in groups

Overview of different variations for 4 people using a 5-point scale


Appendix N Overview of different variations for a 4-person vote using a 5-point scale

All possible outcomes when using a 5-point scale and 4 votes per concept are shown in the chart below. The table on the next page lists the combinations in its columns, together with the average and the standard deviation. The row labelled '# var.' indicates in how many ways the result of that column can be achieved. The sum of all '# var.' values adds up to 625, which equals the total number of possible variations (5 x 5 x 5 x 5). The colouring indicates what each combination means: green indicates accept, red indicates decline and orange indicates revote or discuss. This classification represents the personal preference of the author and was made by examining the scores per option. The second chart of this appendix shows the graphical representation of the table.

To arrive at the classification shown in the table on the next page, both the average value and the four individual voting scores are considered.
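The enumeration in the table can be reproduced programmatically. The sketch below (plain Python, not taken from the thesis) lists every distinct outcome of 4 votes on a 5-point scale, computes the average and the sample standard deviation used in the 'St. Dev' row, and counts in how many orderings each outcome can occur:

```python
from itertools import combinations_with_replacement
from math import factorial, sqrt

def variation_count(votes):
    # number of orderings of this vote multiset: n! / (k1! * k2! * ...)
    n = factorial(len(votes))
    for v in set(votes):
        n //= factorial(votes.count(v))
    return n

def stats(votes):
    mean = sum(votes) / len(votes)
    # sample standard deviation, matching the 'St. Dev' row of the table
    sd = sqrt(sum((v - mean) ** 2 for v in votes) / (len(votes) - 1))
    return mean, sd

outcomes = list(combinations_with_replacement(range(1, 6), 4))
print(len(outcomes))                                  # 70 distinct outcomes
print(sum(variation_count(o) for o in outcomes))      # 625 = 5^4 variations
```

For example, the outcome (1, 1, 1, 2) has average 1.25, standard deviation 0.50 and 4 possible orderings, matching column 2 of the table.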

[Chart: all possible voting outcomes (5-point scale, 4 votes per concept). x-axis: voting outcome (average score, 0.00-6.00); y-axis: standard deviation (0.00-2.50).]


#       1    2    3    4    5    6    7    8    9    10   11   12   13   14   15   16   17
Vote 1  1    1    1    1    1    1    1    1    1    1    1    1    1    1    1    1    1
Vote 2  1    1    1    1    1    1    1    1    1    1    1    1    1    1    1    2    2
Vote 3  1    1    1    1    1    2    2    2    2    3    3    3    4    4    5    2    2
Vote 4  1    2    3    4    5    2    3    4    5    3    4    5    4    5    5    2    3
Average 1.00 1.25 1.50 1.75 2.00 1.50 1.75 2.00 2.25 2.00 2.25 2.50 2.50 2.75 3.00 1.75 2.00
St. Dev 0.00 0.50 1.00 1.50 2.00 0.58 0.96 1.41 1.89 1.15 1.50 1.91 1.73 2.06 2.31 0.50 0.82
# var.  1    4    4    4    4    6    12   12   12   6    12   12   6    12   6    4    12

#       18   19   20   21   22   23   24   25   26   27   28   29   30   31   32   33   34
Vote 1  1    1    1    1    1    1    1    1    1    1    1    1    1    1    1    1    1
Vote 2  2    2    2    2    2    2    2    2    3    3    3    3    3    3    4    4    4
Vote 3  2    2    3    3    3    4    4    5    3    3    3    4    4    5    4    4    5
Vote 4  4    5    3    4    5    4    5    5    3    4    5    4    5    5    4    5    5
Average 2.25 2.50 2.25 2.50 2.75 2.75 3.00 3.25 2.50 2.75 3.00 3.00 3.25 3.50 3.25 3.50 3.75
St. Dev 1.26 1.73 0.96 1.29 1.71 1.50 1.83 2.06 1.00 1.26 1.63 1.41 1.71 1.91 1.50 1.73 1.89
# var.  12   12   12   24   24   12   24   12   4    12   12   12   24   12   4    12   12

#       35   36   37   38   39   40   41   42   43   44   45   46   47   48   49   50   51
Vote 1  1    2    2    2    2    2    2    2    2    2    2    2    2    2    2    2    2
Vote 2  5    2    2    2    2    2    2    2    2    2    2    3    3    3    3    3    3
Vote 3  5    2    2    2    2    3    3    3    4    4    5    3    3    3    4    4    5
Vote 4  5    2    3    4    5    3    4    5    4    5    5    3    4    5    4    5    5
Average 4.00 2.00 2.25 2.50 2.75 2.50 2.75 3.00 3.00 3.25 3.50 2.75 3.00 3.25 3.25 3.50 3.75
St. Dev 2.00 0.00 0.50 1.00 1.50 0.58 0.96 1.41 1.15 1.50 1.73 0.50 0.82 1.26 0.96 1.29 1.50
# var.  4    1    4    4    4    6    12   12   6    12   6    4    12   12   12   24   12

#       52   53   54   55   56   57   58   59   60   61   62   63   64   65   66   67   68
Vote 1  2    2    2    2    3    3    3    3    3    3    3    3    3    3    4    4    4
Vote 2  4    4    4    5    3    3    3    3    3    3    4    4    4    5    4    4    4
Vote 3  4    4    5    5    3    3    3    4    4    5    4    4    5    5    4    4    5
Vote 4  4    5    5    5    3    4    5    4    5    5    4    5    5    5    4    5    5
Average 3.50 3.75 4.00 4.25 3.00 3.25 3.50 3.50 3.75 4.00 3.75 4.00 4.25 4.50 4.00 4.25 4.50
St. Dev 1.00 1.26 1.41 1.50 0.00 0.50 1.00 0.58 0.96 1.15 0.50 0.82 0.96 1.00 0.00 0.50 0.58
# var.  4    12   12   4    1    4    4    6    12   6    4    12   12   4    1    4    6

#       69   70
Vote 1  4    5
Vote 2  5    5
Vote 3  5    5
Vote 4  5    5
Average 4.75 5.00
St. Dev 0.50 0.00
# var.  4    1

St. Dev. Max: 2.31
St. Dev. Min: 0.00
Σ # var.: 625
Theoretical # of variations: 5 x 5 x 5 x 5 = 625

Legend: consensus, accept / no consensus, revote / consensus, decline


[Chart: graphical representation of the table above. Each voting outcome is plotted by average (x-axis, 0.00-6.00) against standard deviation (y-axis, 0.00-2.50) and classified as 'consensus, accept', 'no consensus, revote' or 'consensus, decline'.]


Appendix O Dividing concepts among participants for voting

Below, an example is given of how a GSS can structure the division of concepts among the participants. In the example there are 10 participants (a-j) and 40 concepts (1-40), and each concept is initially rated 4 times. The GSS creates 4 variables for every concept, one for every vote; to distinguish between the 4 votes, the variable names include v1, v2, v3 and v4. In the ideal case, every participant rates 16 concepts. To achieve this, the GSS builds the matrix displayed in the figure below, in which each row represents a participant, followed by the concepts that participant has to rate. The GSS has to satisfy two rules when filling the rows: (1) no concept number may appear twice within one row, and (2) the letters in a row may not match the letter that identifies the participant (so nobody rates their own concepts). Furthermore, the system should try to make all rows equally long.

Dividing concepts for voting

The GSS defines a number of variables per concept, equal to the number of desired votes per concept. Each variable consists of (1) the concept #, (2) the author tag and (3) a tag (v1...vn) that makes each variable unique. A variable therefore looks like 'concept #' + 'author tag' + '.' + 'uniqueness tag', e.g. 1a.v1.

C = concept, A = author

C  1     2     3     4     5     6     7     8     9     10     11     12     13     14     15     16
A  a     a     a     a     a     a     b     b     b     b      b      c      c      d      d      d
v1 1a.v1 2a.v1 3a.v1 4a.v1 5a.v1 6a.v1 7b.v1 8b.v1 9b.v1 10b.v1 11b.v1 12c.v1 13c.v1 14d.v1 15d.v1 16d.v1
v2 1a.v2 2a.v2 3a.v2 4a.v2 5a.v2 6a.v2 7b.v2 8b.v2 9b.v2 10b.v2 11b.v2 12c.v2 13c.v2 14d.v2 15d.v2 16d.v2
v3 1a.v3 2a.v3 3a.v3 4a.v3 5a.v3 6a.v3 7b.v3 8b.v3 9b.v3 10b.v3 11b.v3 12c.v3 13c.v3 14d.v3 15d.v3 16d.v3
v4 1a.v4 2a.v4 3a.v4 4a.v4 5a.v4 6a.v4 7b.v4 8b.v4 9b.v4 10b.v4 11b.v4 12c.v4 13c.v4 14d.v4 15d.v4 16d.v4

C  17     18     19     20     21     22     23     24     25     26     27     28     29     30     31     32
A  e      e      f      f      f      g      g      g      i      i      i      i      j      j      j      e
v1 17e.v1 18e.v1 19f.v1 20f.v1 21f.v1 22g.v1 23g.v1 24g.v1 25i.v1 26i.v1 27i.v1 28i.v1 29j.v1 30j.v1 31j.v1 32e.v1
v2 17e.v2 18e.v2 19f.v2 20f.v2 21f.v2 22g.v2 23g.v2 24g.v2 25i.v2 26i.v2 27i.v2 28i.v2 29j.v2 30j.v2 31j.v2 32e.v2
v3 17e.v3 18e.v3 19f.v3 20f.v3 21f.v3 22g.v3 23g.v3 24g.v3 25i.v3 26i.v3 27i.v3 28i.v3 29j.v3 30j.v3 31j.v3 32e.v3
v4 17e.v4 18e.v4 19f.v4 20f.v4 21f.v4 22g.v4 23g.v4 24g.v4 25i.v4 26i.v4 27i.v4 28i.v4 29j.v4 30j.v4 31j.v4 32e.v4

C  33     34     35     36     37     38     39     40
A  e      c      c      g      g      g      g      j
v1 33e.v1 34c.v1 35c.v1 36g.v1 37g.v1 38g.v1 39g.v1 40j.v1
v2 33e.v2 34c.v2 35c.v2 36g.v2 37g.v2 38g.v2 39g.v2 40j.v2
v3 33e.v3 34c.v3 35c.v3 36g.v3 37g.v3 38g.v3 39g.v3 40j.v3
v4 33e.v4 34c.v4 35c.v4 36g.v4 37g.v4 38g.v4 39g.v4 40j.v4

Participants and concepts to vote on
a 7b.v1 8b.v1 9b.v1 10b.v1 11b.v1 12c.v1 13c.v1 14d.v1 15d.v1 16d.v1 17e.v1 18e.v1 19f.v1 20f.v1 21f.v1 22g.v1
b 1a.v1 2a.v1 3a.v1 4a.v1 5a.v1 6a.v1 23g.v1 24g.v1 25i.v1 26i.v1 27i.v1 28i.v1 29j.v1 30j.v1 31j.v1 32e.v1
c 1a.v2 2a.v2 3a.v2 4a.v2 5a.v2 6a.v2 7b.v2 8b.v2 9b.v2 10b.v2 11b.v2 37g.v3 38g.v3 14d.v2 15d.v2 16d.v2
d 12c.v2 13c.v2 22g.v3 23g.v3 24g.v3 25i.v2 26i.v2 27i.v2 28i.v2 29j.v2 30j.v2 31j.v2 29j.v4 30j.v4 31j.v4 40j.v2
e 1a.v3 2a.v3 3a.v3 4a.v3 5a.v3 6a.v3 7b.v3 8b.v3 9b.v3 10b.v3 11b.v3 12c.v3 13c.v3 14d.v3 15d.v3 16d.v3
f 1a.v4 2a.v4 3a.v4 4a.v4 5a.v4 6a.v4 7b.v4 8b.v4 9b.v4 10b.v4 11b.v4 12c.v4 13c.v4 14d.v4 15d.v4 16d.v4
g 21f.v2 25i.v3 26i.v3 27i.v3 28i.v3 32e.v4 33e.v4 34c.v4 35c.v4 19f.v3 20f.v3 21f.v4 25i.v4 26i.v4 27i.v4 28i.v4
h 33e.v1 34c.v1 35c.v1 36g.v1 37g.v1 38g.v1 39g.v1 40j.v1 22g.v2 23g.v2 24g.v2 39g.v2 17e.v2 18e.v2 19f.v2 20f.v2
i 21f.v3 22g.v4 23g.v4 40j.v4 29j.v3 30j.v3 31j.v3 32e.v3 33e.v3 34c.v3 35c.v3 36g.v3 17e.v3 18e.v3 39g.v3 40j.v3
j 36g.v4 37g.v4 38g.v4 39g.v4 24g.v4 32e.v2 33e.v2 34c.v2 35c.v2 36g.v2 37g.v2 38g.v2 17e.v4 18e.v4 19f.v4 20f.v4
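The division rules above can be sketched as a simple greedy routine (illustrative Python, not the actual GSS implementation; the function and variable names are mine). Each vote for each concept is assigned to the currently least-loaded participant who is not the author and has not rated that concept yet, which also keeps the rows roughly equally long:

```python
def divide_votes(concept_authors, participants, votes_per_concept=4):
    """Assign rating tasks so that each concept is rated
    votes_per_concept times, never by its own author, and no
    participant rates the same concept twice."""
    load = {p: [] for p in participants}
    for concept, author in concept_authors.items():
        for v in range(1, votes_per_concept + 1):
            # rule 2: never the author; rule 1: never the same concept twice
            eligible = [p for p in participants
                        if p != author
                        and all(c != concept for c, _ in load[p])]
            # keep rows equally long: give the task to the lightest-loaded rater
            target = min(eligible, key=lambda p: len(load[p]))
            load[target].append((concept, f"{concept}{author}.v{v}"))
    return load

# 10 participants (a-j), 40 concepts, 4 votes each: ideally 16 tasks per person
authors = {n: "abcdefghij"[(n - 1) % 10] for n in range(1, 41)}
assignment = divide_votes(authors, list("abcdefghij"))
```

With 40 concepts and 4 votes each there are 160 tasks in total, so the ideal load of 16 tasks per participant follows directly from the balancing step.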


Appendix P Detailed evaluation workshop descriptions

This appendix describes the detailed design of and background to the three workshops that were conducted to evaluate the FocusBuilder and Divide&Conquer thinkLets and the NWV-based similarity detection technique.

Green-ICT workshop BER ICT GDR Workshop, 15 February 2010

Problem and background, motive for the session

A group of TPM BSc students following a course on Policy, Economics & Law in the ICT domain has to write a paper on the topic of green-ICT. Green-ICT is a hot topic and concerns sustainability in information and communication technology. Research has revealed that current usage of ICT is as environmentally unfriendly as the commercial aviation sector, and the forecast is that the negative societal effects of using ICT will continue to grow in the future. These effects include energy consumption, disposal of waste and CO2 emissions. The students face the task of writing a paper that formulates solution directions to reduce the negative societal effects of using ICT.

The students are provided with a framework showing different aspects of the information society and their relationships. The framework includes NGOs, the market, the government and the users, with technology as its centre point. The framework facilitates the creation of an overview of which actors are involved and in what way.

The structure of the paper that the students have to write is provided and looks as follows:

1. Introduction: context of the problem, actor analysis and overview of the domain.
2. Framework: guided by a framework, the students have to explain the choice for the design of a policy plan that is elaborated upon in their paper. This section also contains a clear problem statement and the choice of an actor perspective.
3. The choice for an arrangement of policy measures to diminish the negative societal effects of using ICT.
4. A plan to evaluate the proposed measures.
5. A conclusion, containing a reflection on points 2, 3 and 4.

Task analysis: Goal, deliverables, and objectives The objective of the workshop is to provide the students with a good starting point for writing their paper on a solution direction. This starting point should consist of a body of shared knowledge concerning the actors, factors and context variables surrounding the topic of green-ICT. The outcomes of the brainstorm should provide the students with enough knowledge to write the introduction of their paper. To achieve this the brainstorm will focus on the different actors in the field with their corresponding networks, perceptions, values and resources (Hermans & Thissen, 2009). Since all students are asked to study one publication regarding green-ICT in advance of the workshop, the brainstorm will facilitate sharing and combining their knowledge. After the brainstorm a convergence step is needed to consolidate and create shared understanding.

Besides these two learning goals the workshop is also intended to evaluate a new method for convergence in GSS supported groups. The method focuses on parallelizing the participants’ effort to


create the converged artefact, as visualized in Figure 7.3. This method has already been used in other workshops. Based on the evaluation of the method in those workshops and on expert opinions, small changes have been made.

The evaluation of the method is done on the process and result level. Coding of the results (the brainstorm and convergence artefacts) allows for the calculation of several interesting and relevant metrics; this happens after the workshop and does not require any input from the participants. For the evaluation of the process, the input of the participants is needed: they are kindly asked to fill out a questionnaire, containing 30 questions, after the workshop. In order not to influence the participants in filling out the questionnaire, it is not mentioned in advance that the data will be used in a graduation project; the participants are only informed about this afterwards.

Participants: The group consists of 15 - 20 highly motivated and interested bachelor students in the technology, policy & management domain. Their stake is to gain knowledge on the factors of and actors present in the green-ICT field to write their paper.

Resource analysis: The time available for the workshop is 90 minutes, divided into two equal parts with a 15 minute break in between. A fully equipped GSS (ThinkTank) and one smart board are available in a dedicated meeting room. To allow the participants to use the GSS and to allow the subgroups to collaborate the room layout will be as visualized in the figure below. The chairs used have wheels, so they can be easily moved from table to table.

Process & result decomposition The workshop consists of two parts. Part 1 focuses on the factors regarding the topic of green-ICT; the second part focuses on the actor analysis.

Part 1 will generate a list of aspects that are relevant for the problem. The outcome of part 2 is a complete actor analysis of all relevant actors in the green-ICT domain. The actor analysis will be structured according to the ICT-in-context framework as developed by Ubacht.

Green-ICT workshop, room layout.


Validation

A walkthrough of the workshop design was conducted together with the teacher to validate the design and ensure its fit with the educational goals of the workshop.

Green-ICT workshop: overview of the workshop.

Agenda summary:
1. Introduction tool (Group Support System)
2. Introduction program
3. Brainstorm on the factors that constitute green-ICT
4. Creating one final list of factors
5. Brainstorm on actors in the green-ICT field
6. Add their values, stakes, perceptions and resources
7. Creating one final actor analysis

Energy transition workshop BER EWI GDR Workshop, 9 March 2010

Version 5 (7-3-2010) – final version

Problem and background, motive for the session

The course SPM3530 focuses on policy, economics and law within the domain of energy, water and industry. The first 6 weeks of the 12-week course are devoted to the theme of liberalisation, market organisation and control. Topics within this theme are motives for and effects of liberalisation, de- and reregulation, economic regulation and control, and national and international competition rules. Learning goals for the course are (1) being able to give advice on or an analysis of new or existing policy or economic phenomena, (2) reflecting on positions, analyses and propositions in the field and (3) applying the new knowledge to other domains at a high level.

One of the ways of education is to stimulate debate among the students to help them to make the new knowledge ready for use and to stimulate synthesis of policy, economics and law. A GSS


workshop is a good way to enable debate and to activate acquired knowledge; see for instance Kolfschoten (2007) or de Bruijn and de Vreede (1999). An additional motive to organize the workshop is to evaluate two convergence process designs as part of a master thesis project. The two process designs have already been evaluated at an architecture level, and a walkthrough with collaboration experts was conducted. This phase of the evaluation aims to reveal how groups react to the processes and what results they produce. The learning goal of the workshop remains of the greatest importance.

Within the complex field of policy, economics and law applied to energy, water and industry, a number of questions arise, for instance regarding:

• Speeding up the energy transition process
  o 'What are ways to speed up the energy transition process?'
  o What are goals for doing so?
  o How can the achievement of these goals be monitored or measured?

• The instruments / tools owned by the government to steer / guide
  o 'What instruments / ways / tools does the government have to influence this process?'
  o How successful are these instruments?
  o What are the motivations for and implications of using the instruments?

• Criteria
  o 'What are criteria for energy transition?'

• Goals, evaluation criteria
  o 'What are goals (from different actors?) regarding energy transition?'
  o 'Given these goals, what would be criteria to evaluate the chosen policy?'

• Boundary conditions, external factors
  o 'Which boundary conditions can you identify?'
  o 'What are external factors in the field of energy transition?'

• Uncertainties
  o 'What uncertainties can you identify regarding energy transition?'

But also questions regarding:

• Unbundling of transmission system operators
• Coordination between transmission system operators (TSOs) and supervisors
• Extent of competition within Europe
• Investments in CO2 reduction and sustainability
• The coordination of investment in networks and production (monopoly versus market; alternative energy sources such as wind)
• Security of supply of primary sources of energy

In close cooperation with the teacher it was chosen to use three relevant documents as input and background for the workshop. These are:

- the Energierapport 2008,
- the AER report (2009) on the regulation of energy networks, and
- the Clingendael report on fuel policy.


We want to show the students the content of the documents in an interactive way and want to extract interesting elements to evoke discussion on the content.

Task analysis: Goal, deliverables, and objectives After an introduction the workshop starts by collaboratively defining goals for energy transition. After a quick round of convergence to select the most important goals, the goals will be made measurable by adding criteria.

The next question to be dealt with is how these goals can be achieved. To answer this question the students have read the three documents mentioned above, which partially answer it. In the workshop we are interested both in the instruments that are mentioned in the documents and in the instruments that the students come up with themselves. After converging on these instruments there is room for discussion, focusing on the content of the documents, the instruments that the government has and the instruments that the students came up with.

Next, the group is split up into four sub-groups. Each sub-group is asked to combine the instruments from the second question into a strategy for the minister, and each of the four sub-groups presents its strategy to the entire group. The group is given the opportunity to react to the presentations (electronically) and to score each strategy on (some of) the criteria defined in the first part of the workshop (electronically). The teacher will also react to the presentations plenarily.

Besides these learning goals the workshop is also intended to evaluate two new methods for convergence in GSS supported groups.

One method focuses on parallelizing the participants’ effort to create the converged artefact as is visualized in Figure 7.3. This method has already been used in other workshops. Based on the evaluation of the method in other workshops and expert opinions, small changes have been made.

The evaluation of the method is done on the process and result level. Coding of the results (the brainstorm and convergence artefacts) allows for the calculation of several interesting and relevant metrics. This happens after the workshop and does not require any input from the participants. For the evaluation of the process the input from the participants is needed. They are kindly asked to fill out a questionnaire, containing 30 questions, after the workshop.

The second method aims to speed up the process of selecting instrumental concepts from the brainstorm artefact. The method is based on rating concepts on a 5-point scale. Initially not all participants, but only 4, rate every concept. Based on the average score and standard deviation of each concept, the system determines whether the concept should be included for further consideration or can be disregarded. For all concepts for which the average score and standard deviation are indecisive, the system collects votes from all participants in a second round of voting. This way of working is visualized in Figure 7.1.
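The first-round decision rule can be sketched as follows (illustrative Python; the threshold values are assumptions for the sake of the example, not the thesis's exact cut-offs). A high average with low spread accepts a concept, a low average with low spread declines it, and anything else is deferred to a full-group revote:

```python
from statistics import mean, stdev

def classify(ratings, accept_avg=3.5, decline_avg=2.5, max_sd=1.0):
    """First-round decision on a concept from 4 ratings on a 5-point
    scale. Threshold values here are illustrative assumptions."""
    avg, sd = mean(ratings), stdev(ratings)
    if sd <= max_sd and avg >= accept_avg:
        return "accept"      # consensus on a high score
    if sd <= max_sd and avg <= decline_avg:
        return "decline"     # consensus on a low score
    return "revote"          # indecisive: collect votes from everyone

print(classify([4, 5, 5, 5]))  # accept
print(classify([1, 1, 2, 1]))  # decline
print(classify([1, 5, 2, 4]))  # revote
```

Only the concepts classified as 'revote' reach the second round, which is where the time saving of the method comes from.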

The evaluation of this method follows the same procedure as described above: the results are coded after the workshop, without input from the participants, and the participants' view on the process is collected with the same 30-question questionnaire.


Participants Approximately 20 – 30 students will be present in the workshop. The language of the workshop is Dutch.

Resource analysis

The time available for the workshop is 3.5 hours (210 minutes), from 09:00 till 12:30. A fully equipped GSS is available.

Process & result decomposition The workshop can be broken down into four parts. The first part focuses on creating shared understanding about the field of energy transition and the formulation of goals for the energy transition. Next to goals, criteria to measure and monitor them are listed. In the second part of the workshop the participants combine their knowledge to create instruments to achieve these goals. The outcome is a pre-selection from a rough list of brainstormed instruments. In the third part of the workshop the participants are given this pre-selection to facilitate the formation of a strategy for energy transition. This strategy is a combination of different instruments. The group is divided into four subgroups; each group defines its own strategy. The fourth part of the workshop focuses on the presentations of the four groups. Also feedback on the presentations is collected.

Validation

Several walkthroughs with the teacher were conducted.

Shared space workshop

Problem and background, motive for the session The course SPM3630 focuses on policy, economics and law within the domain of transport, infrastructure & logistics. Topics that are discussed within the course are the transport market and its actors, the history of government interventions, public transportation, market development regarding goods transport and public – private cooperation, regulation regarding the development of infrastructure and innovations. The learning goals are (1) insight into the economic and public administration policy theory and (2) insight into the involvement and interventions of the government in the transport market.

One of the ways of education is to stimulate debate among the students to help them make the new knowledge ready for use and to stimulate the synthesis of policy, economics and law. A GSS workshop is a good way to enable debate and to activate acquired knowledge; see for instance Kolfschoten (2007) or de Bruijn and de Vreede (1999). An additional motive to organize the workshop is to evaluate two convergence process designs as part of a master thesis project. The two process designs have already been evaluated at an architecture level, and a walkthrough with collaboration experts was conducted. This phase of the evaluation aims to reveal how groups react to the processes and what results they produce. The learning goal of the workshop remains of the greatest importance.

Task analysis: Goal, deliverables, and objectives Within the course a lecture from a guest speaker was organized. The topic of this guest lecture was traffic safety. In the lecture general aspects of traffic safety in the Netherlands were presented to the students in a historical overview. The lecture ended with the introduction of the shared space


concept; more information on this concept can be found on the website of the Shared Space Institute (2010).

To stimulate the students to think about the benefits of and concerns with the realization of this concept, they are asked to make a list of the 5 to 10 most important benefits and concerns. For this exercise the students are given the role of advisor to a local city councillor who is 100% enthusiastic about the concept and wants to realize it at a busy crossing in his city.

The objective of the exercise is therefore to stimulate the students to think of what the implementation of the concept would mean for traffic safety in the city. The deliverable is a list of prioritized benefits and concerns, based on which an advice to the councillor should be given.

Participants

17 bachelor students, 3 female and 14 male, aged between 19 and 24. All participants had little or no work experience. The guest lecture and workshop are an integral part of the course; attendance at the workshop was not obligatory to pass the course.

Resource analysis Fully equipped GSS by TeamSupport.

Process & result decomposition

The figure below visualizes the outline of the workshop. First the guest lecturer gives an overview of the development of road safety in the Netherlands and introduces the concept of shared space. To prepare the students, a 2-page document with a general introduction to the concept was made available a week in advance; a copy can be requested from the author and is found on the CD-ROM. After the introduction of the concept the students are introduced to their role of advisor to a city councillor. Next, a brainstorm on the benefits and concerns is conducted. To make a selection of the most important benefits and concerns, the Divide&Conquer thinkLet is used. The two lists of selected benefits and concerns serve as input for a discussion in which the advice to the councillor is formulated.

Shared space workshop process outline.

Validation A walkthrough with the principal lecturer and guest lecturer was conducted.


Appendix Q Agenda evaluation workshops

Agenda green-ICT workshop

# | Activity | Question / Assignment | Result | ThinkLet & Pattern | Time

1. Introduction
   Question/Assignment: introduction to the goal of the workshop, the process to be followed and the GSS.
   Result: interest & motivation
   ThinkLet & Pattern: presentation
   Time: 10 minutes

2. Finding factors that describe the green-ICT context / problem
   Question/Assignment: 'What factors come to your mind when you think of green-ICT?'
   Result: list of factors
   ThinkLet & Pattern: FreeBrainstorm, diverge
   Time: 10 minutes

3. Removing redundancy and ambiguity from the brainstorm
   Question/Assignment: 'I'll divide you into four groups. Each group is responsible for a subset of factors. You are asked to remove redundant items from your category and to summarize the information where possible. When you are done, we will present the information to each other. There is room for discussion.'
   Result: cleaned-up list of factors, shared understanding of the factors (and thereby the problem)
   ThinkLet & Pattern: FocusBuilder, converge
   Time: 15 minutes

4. Evaluation & break
   Question/Assignment: 'Please fill out the questionnaire and have some coffee.'
   Result: evaluation data & refreshment for the participants
   Time: 10 minutes

5. Brainstorm for relevant actors
   Question/Assignment: 'The system shows 4 different categories of actors (NGO, government, market and users). Use your background knowledge and the insights from the paper that you have read to enter as many actors as possible in the correct categories.'
   Result: overview of actors, categorized
   ThinkLet & Pattern: LeafHopper, diverge
   Time: 10 minutes

6. Identifying actors' values, roles, perceptions and resources
   Question/Assignment: 'As you know from previous courses, actors have different values, roles, perceptions and resources. These have an impact on the design of green-ICT. For every actor we have just identified, write down the value, role, perception and resources. You can do this by double clicking an actor.'
   Result: actors and their values, roles, perceptions and resources
   ThinkLet & Pattern: LeafHopper, diverge
   Time: 10 minutes

7. Removing redundancy and ambiguity from the brainstorm
   Question/Assignment: 'I'll divide you into four groups. Each group is responsible for one category of actors from the brainstorm. You are asked to remove redundant items from your category and to summarize the information where possible. When you are done, we will present the information to each other. There is room for discussion.'
   Result: shared understanding and meaning of the actors and their stakes
   ThinkLet & Pattern: FocusBuilder, converge
   Time: 20 minutes

8. Evaluation & wrap up
   Question/Assignment: 'Please fill out the questionnaire.' Explain the use of the questionnaire. The results of today's workshop will be provided on Bb.
   Time: 5 minutes

Total: 90 minutes

Agenda energy transition workshop

# | Activity | Question / Assignment | Result | ThinkLet & Pattern | Time

1. Introduction
   Question/Assignment: topic and subject introduction; agenda of the coming 4 hours; GSS introduction; goals & deliverables.
   Result: shared understanding of and commitment to lecture and workshop
   ThinkLet & Pattern: presentation by Mr. De Vries and Gijs
   Time: 15 minutes

Part 1: Goals and criteria (start 09:15, preferably 09:00)

2. Brainstorm
   Question/Assignment: identify and write down all goals for energy transition that you can think of.
   Result: rough list of goals
   ThinkLet & Pattern: FreeBrainstorm
   Time: 10 minutes


3. Selecting most important goals
   Question/Assignment: 'Because the list of goals you have created is this long, we want to make a selection. This happens in two rounds of voting. In the first round the system divides the goals in such a way that only four votes per goal are collected. This helps to speed up the selection process. In the second round we will vote on all goals for which the originally collected four votes were not decisive. The system uses the average and standard deviation for this.'
   Result: selection from the rough list of goals
   ThinkLet & Pattern: parallel selecting
   Time: 10 minutes

4. Cleaning up the selection of goals
   Question/Assignment: the system has divided the goals into three (or two) groups. [In the case of three, a short discussion on the indecisive category.]
   Result: clean list of measurable goals
   ThinkLet & Pattern: ReviewReflect; facilitated discussion by Mr. De Vries & Gijs
   Time: 10 minutes


5. Selecting goals and adding criteria
   Question/Assignment: together with your 2 neighbours, look at the list, pick out one goal and think of a way to make this goal measurable. [Write the goal on the paper in front of you.]
   Result: +/- 8 goals with criteria
   Time: 15 minutes

Part 2: Instruments (start 10:00, preferably 09:45)

6. Identify instruments from literature
   Question/Assignment: from the 3 documents that you have read, select and write down the instruments to achieve the goals that we have identified.
   Result: rough list of instruments
   ThinkLet & Pattern: FreeBrainstorm
   Time: 10 minutes

7. Brainstorm instruments
   Question/Assignment: 'In addition to these instruments, can you think of any others?' (radio buttons)
   Result: rough list of instruments
   ThinkLet & Pattern: FreeBrainstorm
   Time: 10 minutes


8 [only when needed]

Selecting most important instruments

‘Because the list of instruments you have created is this long, we want to make a selection. This happens in two rounds of voting. In the first round the system divides the goals in such a way that only four votes per goal are collected. This helps to speed up the selection process. In the second round we will vote on all goals for which the originally collected four votes were not decisive. The system used the average and standard deviation for this

[only the first round is used; the goal is to separate the on-topic from the off-topic instruments]

Selection of most important instruments

Parallel selecting 10 minutes
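The first voting round above collects only four votes per instrument instead of having every participant rate the whole list. One straightforward way to achieve such a distribution is a round-robin assignment; the sketch below assumes this scheme, since the agenda does not specify which algorithm the TeamSupport tool actually uses.

```python
from itertools import cycle

def distribute(items, participants, votes_per_item=4):
    """Assign each item to exactly `votes_per_item` participants,
    spreading the voting load evenly (round-robin).

    The actual distribution algorithm in TeamSupport is an assumption here.
    """
    ballots = {p: [] for p in participants}
    rotation = cycle(participants)
    for item in items:
        for _ in range(votes_per_item):
            ballots[next(rotation)].append(item)
    return ballots

# 8 items, 8 participants, 4 votes per item: each participant rates 4 items
ballots = distribute(list(range(8)), [f"p{i}" for i in range(8)])
assert all(len(b) == 4 for b in ballots.values())
```

This is what makes the pre-selection faster than plenary voting: each participant rates only a fraction of the full list.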

9 Discussing results of the instrument selection

10 minutes

(9) First round of convergence on instruments

‘You are now going to work in groups; the output of the previous voting round is displayed on your screens. Together with your group, combine similar instruments and separate the instruments that you and your group consider useful from the others. From the list you create, extract the 5 instruments that you consider most useful. After this activity, you will briefly be given the opportunity to share your top five with the other groups; the teacher will react to this and provide you with feedback and tips.’

Creating a clean list of instruments.

8-fold FocusBuilder

FocusBuilder (round 1)

9 Discussion on instruments

Regarding this list of instruments, what conclusions can be drawn? Shared understanding

Facilitated discussion, Mr. de Vries & Gijs

20 minutes

Break [this break is needed to prepare the input for the next part] 15 minutes

Part 3; defining strategies


To help the students execute task 9, the TeamSupport tool can be used. Because of the group size and the available time, it was decided to work with only four groups. The resulting large group size calls for a tool that supports collaboration. In previous projects, students were very positive about using this tool to support the collaboration needed to finish their project, as described by Kolfschoten and Lee (2010). Their findings are incorporated into the instruction document that is presented to the students.

Agenda shared space workshop. The agenda and time planning are as follows:

9

Define your own strategy

Select instruments and combine them into a strategy, with your group

Based on the output of the previous activity you will get a list of the instruments that you came up with. The list still contains redundant instruments. Use the instructions on your table to clean up the list and select the instrument(s) that you find most promising.

Prepare a short (PowerPoint?) presentation

Template?

For example:

1. Strategies (3-5)

2. Motivation (match with a goal)

3. Elaboration

4. Implication(s)

4 strategies Parallel group work 35 minutes

10 Presenting & reflecting on strategies

Each group presents; the other groups reflect by typing in their comments. After each presentation there is limited time for reflection and discussion, based on the comments.

4 strategies presented

presentations 45 minutes

# Activity Question/ Assignment Result ThinkLet & Pattern Time


1

Guest lecture Traffic safety & shared space

Background and introduction of

presentation 30 minutes

2

Brainstorm benefits shared space concept

What benefits do you see regarding the realization of shared space?

Rough list of perceived benefits

Free Brainstorm 10 minutes

3

Pre-selection benefits

Rate every benefit on a 5-point scale Selection of rough list of benefits

Divide&Conquer, 2 rounds 15 minutes

4

Brainstorm concerns shared space concept

What concerns do you see regarding the realization of shared space?

Rough list of perceived concerns

Free Brainstorm 10 minutes


5

Pre-selection concerns

Rate every concern on a 5-point scale Selection of rough list of concerns

Divide&Conquer, 2 rounds 15 minutes

6

Presentation of the selected benefits and concerns

Are these results surprising? Reaction from the guest lecturer

Shared understanding of benefits and concerns

Plenary presentation of prioritized list of selected benefits and concerns.

15 minutes

7

Combining the benefits and concerns into an advice

Advice to the councillor

Plenary presentation and discussion 25 minutes


Appendix R Result oriented data of the evaluation workshops

Green-ICT workshop data

Green-ICT workshop, result oriented data

Activity | RC | RC-on | RC-off | RC-U | RC-A | RCD | RCD-unique | RCD-redundant | RCD-U | RCD-A | Time [min] | Candidates for criticalness
Brainstorm factors | 87 | 75 | 12 | 50 | 37 | 115 | 62 | 53 | 70 | 45 | 12 | 54
FocusBuilder round 1 | 61 | 61 | 0 | 48 | 13 | 74 | 57 | 17 | 60 | 14 | 9 | 57
FocusBuilder round 2 | 55 | 55 | 0 | 44 | 11 | 67 | 50 | 17 | 57 | 10 | 11 | 50
FocusBuilder round 3 | 56 | 56 | 0 | 54 | 2 | 69 | 58 | 11 | 67 | 2 | 12 | 58
Brainstorm actors | 230 | 185 | 45 | 186 | 44 | 250 | 231 | 19 | 200 | 50 | 21 | 195

Energy transition workshop data

Energy transition workshop, result oriented data

RC RC-on RC-off RCU RCA RCD RCD-Unique RCD-Redundant RCD-unamb. RCD-amb. Time

Context Brainstorm 68 65 3 61 7 95 68 27 87 8 8

Context Pre-selection, round 1/1 Total 68 65 3 61 7 95 68 27 87 8 8

Accepted 35 35 0 33 2 55 43 12 53 2

Declined 18 16 2 15 3 21 16 5 18 3

Unsure 15 14 1 13 2 19 9 10 16 3

Goals Brainstorm 59 55 4 50 9 74 56 18 65 9 6

Goals Pre-selection, round 1/1 Total 59 55 4 50 9 74 58 16 65 9 8

Accepted 40 39 1 35 5 51 41 10 46 5

Declined 11 9 2 9 2 12 10 2 10 2

Unsure 8 7 1 6 2 11 7 4 9 2

Goals selection 8 8 0 8 0 9 9 0 9 0

Instruments Brainstorm 129 105 24 86 43 133 99 34 90 43 10

Instruments Pre-selection round 1/2 Total 129 105 24 86 43 133 99 34 90 43 5

Accepted 46 46 0 41 5 48 30 18 43 5

Declined 56 34 22 28 28 58 49 9 30 28


Unsure 27 25 2 17 10 27 20 7 17 10

Instruments Pre-selection round 2/2 Total 129 105 24 86 43 133 99 34 90 43 5

Accepted 56 56 0 48 8 58 35 23 50 8

Declined 73 49 24 38 35 75 64 11 40 35

Instruments selection Group 1 41 41 0 37 4 42 41 1 38 4 60

Instruments selection Group 2 36 36 0 36 0 36 35 1 36 0 60

Instruments selection Group 3 55 55 0 50 5 55 34 21 51 4 60

Instruments selection Group 4 49 49 0 45 4 49 31 18 45 4 60

Shared space workshop data

Shared space workshop, result oriented data

RC RC-on RC-off RCU RCA RCD RCD-Unique RCD-Redundant RCD-unamb. RCD-amb. time

brainstorm benefits 46 39 7 38 8 55 51 4 47 8 10

1st round voting totals 46 39 7 38 8 55 51 4 47 8 3

accepted 17 17 0 17 0 22 21 1 22 0

declined 16 10 6 9 7 16 14 2 9 7

unsure 13 12 1 12 1 17 16 1 16 1

2nd round voting totals 46 39 7 38 8 54 50 4 46 8 3

accepted 20 20 0 20 0 28 27 1 28 0

declined 26 19 7 18 8 26 23 3 18 8

final selection 5 5 0 5 0 5 5 0 5 0 30


Appendix S Participant comments to the evaluation workshops

Green-ICT workshop

Participant | Wat vond je van het proces dat gevolgd werd in de workshop? (What did you think of the process of the workshop?) | Wat vond je van de begeleiding? (What did you think of the facilitation?)
1 | Gepast, nieuw (Suitable, new) | Enthousiast (Enthusiastic)
2 | Logisch opgebouwd (Logically structured) | Ok! (Ok!)
3 | Ik vond het interessant (I found it interesting) | Prima (Fine)
4 | Snel, + (Fast, +) | Goed en duidelijk (Good and clear)
5 | Oke, opzet prima (Ok, the design was fine) | De begeleiding was uitstekend (The facilitation was excellent)
6 | Goed gestructureerd (Well structured) | Prettig, open, betrokken (Pleasant, open, committed)
7 | prima (Fine) | prima (Fine)
8 | De flow was wel aardig (The ‘flow’ was ok) | Wel goed (Quite good)
9 | Voldoende (Sufficient) | goed (Good)
10 | - | -
11 | Beetje nutteloos (A little useless) | Wel goed (Quite good)
12 | Goed, soepel (Good, smooth) | goed (Good)
13 | Logische opbouw (Logical structure) | Enthousiast, A++ (Enthusiastic, A++)

Table 1, green-ICT workshop: participant comments on process and facilitation

Participant | Wat vond je van het resultaat van de workshop? (What did you think of the result of the workshop?)
1 | Verwacht, voldoende (As expected, sufficient)
2 | Biedt genoeg uitgangspunten voor vervolg (Offers enough starting points for a follow-up)
3 | Ok (OK)
4 | 1e deel goed, 2e deel moeilijker (First part good, second part more difficult)
5 | Matig, maar er zit ook niet meer in (Moderate, but difficult to achieve a better result)
6 | Deel 1 erg goed, deel 2 iets minder, minder tijd / diepgang (First part very good, second part a little less: less time and depth)
7 | Matig, kleine ideeën, geen uitgebreide (Moderate, only small ideas, no elaboration)
8 | Niet wereldschokkend (Not earth-shattering)
9 | Matig (Moderate)
10 | -
11 | Grappig (Funny)
12 | Niet voldoende afgebakend (Not sufficiently demarcated)
13 | Redelijk / goed (Reasonable / good)

Table 2, green-ICT workshop: participant comments on result

Energy transition workshop

Participant | Wat vond je van het proces dat gevolgd werd in de workshop? (What did you think of the process of the workshop?) | Wat vond je van de begeleiding? (What did you think of the facilitation?)
1 | goed (Good) | goed (Good)
2 | Prima, logisch (Fine, logical) | Duif is gek? (Pigeon is crazy?)
3 | Prima, gestructureerd (Fine, structured) | Goed, helder uitgelegd (Good, clearly explained)
4 | Duidelijk, je werd aan de hand genomen (Clear, you were guided step by step) | goed (Good)
5 | Goed, ik vond alleen het ordenen en rangschikken van de uitkomsten onvoldoende (Good, only the ordering and ranking of the outcomes was insufficient) | Goed, duidelijk (Good, clear)
6 | goed (Good) | goed (Good)
7 | inefficiënt (Inefficient) | matig (Moderate)
8 | - | uitstekend (Excellent)
9 | Duidelijk, snel, overzichtelijk (Clear, fast, well organized) | Duidelijk, helder (Clear)
12 | Efficiënt, wel graag meer selectie mogelijkheden (Efficient, though I would like more selection options) | Zeer goed (Very good)
13 | - | goed (Good)
14 | Soepel verlopend (Smooth) | duidelijk (Clear)
15 | Interessant (Interesting) | goed (Good)
16 | Iets meer structuur nodig (A little more structure needed) | goed (Good)

Table 3, energy transition workshop, participant comments on process and facilitation

Participant | Wat vond je van het resultaat van de workshop? (What did you think of the result of the workshop?)
1 | Matig (wel naar verwachting) (Moderate, but as expected)
2 | Prima (Fine)
3 | Zou nuttig kunnen zijn (Could be useful)
4 | Matig, weinig tijd om echt zinnige dingen te zeggen (Moderate, little time to say really meaningful things)
5 | Goed, snelle ordening redelijk eenduidig (Good, fast ordering, fairly unambiguous)
6 | Matig (Moderate)
7 | Matig (Moderate)
8 | Leerzaam (Instructive)
9 | Te kort door de bocht (Too hasty, cutting corners)
12 | Helder (Clear)
14 | Voldoende (Sufficient)
15 | Mooi om te zien dat iedereen verschillende richtingen op ging (Nice to see that everyone went in different directions)
16 | Voor de beperkte tijd goed (Good, given the limited time)

Table 4, energy transition workshop, participant comments on result

Participant | Opmerkingen selectie methode & software (TeamSupport) (Remarks regarding the selection method & software (TeamSupport))
1 | Software kan logischer, soms werd er gereageerd wat dan gezien werd als los doel. Met andere woorden waar is functie om een reactie te koppelen aan een al eerder genoemd doel. (The software could be more logical; sometimes reactions were interpreted as separate goals. In other words, where is the function to attach a comment to a goal already in the list?)
2 | Anonimiteit nodigt in deze groep uit tot onzin, resultaten zijn dus niet optimaal, technisch is prima, soms f5 nodig (In this group anonymity invites nonsense, so the results are not optimal. Technically fine, sometimes F5 needed.)
6 | Matig onderbouwde antwoorden die niet constructief zijn en dus onbruikbaar. Verder zijn het geen nieuwe ideeën. Deze brainstorm en het stemmen is een specifieke methode voor een zeer onspecifieke discussie. Misschien samenvoegen van dezelfde categorieën maakt het efficiënter. De resultaten uit de workshop werden niet direct gebruikt voor de presentaties, omdat deze te algemeen en weinig vernieuwend waren. (Moderately reasoned answers that are not constructive and therefore unusable. Also no new ideas. This brainstorm and voting are specific methods for a very unspecific discussion. Perhaps merging identical categories would make it more efficient. The results from the workshop were not directly used for the presentations, because they were too general and not innovative enough.)
8 | Zijn 4 stemmen per optie wel genoeg om voldoende representativiteit te bereiken? (Are 4 votes per option enough to achieve sufficient representativeness?)
9 | De voting of het 2e pijltje heeft een selectie gemaakt, waarvan vaak dezelfde onderwerpen terug komen ipv een noemer worden geschaard. Moeilijk om gebrainstormde middelen goed te verdedigen, omdat je ze zonder na te denken moest uittypen en dan opeens in stap 3 de slag moest maken om met een advies te komen. Te korte tijd voor de presentatie. (The voting (or the 2nd arrow) made a selection in which the same subjects often recur instead of being grouped under one heading. Hard to properly defend the brainstormed instruments, because you had to type them out without thinking and then suddenly, in step 3, had to come up with an advice. Too little time for the presentation.)
13 | Verkregen ideeën waren lastig te verwerken voor ppt (The obtained ideas were hard to use for the PowerPoint presentation)
15 | Veel onzin, maar deze kan er snel uitgefilterd worden. Veel dezelfde (oplossingen, doelen) verwoordingen. In hoeverre is er controle op ideeën die declined zijn? (A lot of nonsense, but it can be filtered out quickly. Many identical formulations (solutions, goals). To what extent are ‘declined’ ideas checked?)
16 | Het zou helpen als het doel duidelijk op het scherm zou staan (It would help if the goal were clearly shown on the screen)

Table 5, energy transition workshop, other participant comments (method and software)

Shared space workshop

Participant | Wat vond je van het proces dat gevolgd werd in de workshop? (What did you think of the process of the workshop?) | Wat vond je van de begeleiding? (What did you think of the facilitation?)
2 | Goed, jammer van de systeemfout (Good, the system failure was a pity) | Goed, prima (Good, fine)
3 | goed (Good) | goed (Good)
4 | mooi (Nice) | goed (Good)
5 | goed (Good) | goed (Good)
6 | voldoende (Sufficient) | voldoende (Sufficient)
7 | Ging snel en soepel (Fast and smooth) | helder (Clear)
8 | Verliep goed, jammer van failure (Went well, the system failure was a pity) | Goed, helder verteld (Good, clearly explained)
9 | Werkte goed (Worked well) | goed (Good)
10 | leuk (Fun) | Zeer goed en betrokken (Very good and committed)
11 | Goed en gestructureerd (Good and structured) | duidelijk (Clear)
12 | Helder, vooral 1e programma (Clear, especially the first programme) | Helder & duidelijk (Clear & distinct)
13 | Ging goed (Went well) | Prima (Fine)
14 | Goed, goed gereageerd op falen systeem (Good, good reaction to the system failure) | Positief (Positive)
15 | Duurde veel langer dan nodig (Took much longer than needed) | Goed (Good)
16 | Leuk (Fun) | Goed (Good)
17 | 1e workshop zeer goed (1st workshop very good) | Goed (Good)

Table 6, shared space workshop, participant comments on process and facilitation

Participant | Wat vond je van het resultaat van de workshop? (What did you think of the result of the workshop?)
2 | goed (Good)
3 | goed (Good)
4 | interessant (Interesting)
5 | tevreden (Satisfied)
6 | voldoende (Sufficient)
7 | Het resultaat kwam overeen met de verwachting (The result met expectations)
8 | prima (Fine)
9 | Redelijk tot goed (Moderate to good)
10 | goed (Good)
11 | Zoals verwacht, gewenste resultaat (As expected, the desired result)
13 | prima (Fine)
14 | Goed (Good)
15 | Goed, maar wat hebben we eraan? (Good, but what use is it?)
16 | Interessant (Interesting)
17 | Leuk (om te zien dat het klopt met de werkelijkheid) (Nice (to see that it corresponds with reality))

Table 7, shared space workshop, participant comments on result


Participant | Opmerkingen (Remarks)
4 | Voor/nadelen die dubbel staan wordt niet goed mee omgegaan (Redundant benefits/concerns are not handled well)
5 | Jammer van de switch! (A pity that we had to make the switch!)
9 | Vond eerste programma beter, score geven i.p.v. aanvinken (Liked the first program better: giving scores instead of placing checkmarks)
10 | Wellicht beter geweest als verschillende, op elkaar lijkende ideeën werden gebundeld (It would perhaps have been better if similar ideas had been merged)

Table 8, shared space workshop, general participant comments


Appendix T Divide&Conquer thinkLet TeamSupport Mock-ups

Facilitator edit session screen



Facilitator initiation screen

Facilitator 1st round result screen


Facilitator 2nd round result screen

Participant 1st round voting screen


Participant 2nd round voting screen

Facilitator & participant end of module screen


Appendix U FocusBuilder thinkLet TeamSupport Mock-ups

Facilitator edit session screen

Facilitator session settings screen


Facilitator overview screen 1st round

4 similar participant screens for 1st round


Facilitator overview screen 2nd round

2 similar participant screens for 2nd round


Facilitator overview screen for final round


Appendix V TeamSupport screenshots: support for the Divide&Conquer thinkLet

Brainstorming. Left: participant view, right: facilitator / beamer view

First round Divide&Conquer thinkLet. Left: participant view, right: facilitator / beamer view


First round Divide&Conquer thinkLet. Left: participant view, right: facilitator / beamer view

Results first round Divide&Conquer thinkLet. Left: participant view, right: facilitator / beamer view


Results second round Divide&Conquer thinkLet. Left: participant view, right: facilitator / beamer view



Appendix X CD-ROM