Staging Reflections on Ethical Dilemmas in Machine Learning: A Card-Based Design Workshop for High School Students

Leave Authors Anonymous for Submission, City, Country, e-mail address
Leave Authors Anonymous for Submission, City, Country, e-mail address
Leave Authors Anonymous for Submission, City, Country, e-mail address

Figure 1: a) A group of high school students discussing what data is relevant for their idea for a new ML system using a deck of ML data cards; b) A group of students using our ML board to describe how ML is used in their idea.

ABSTRACT
The increased use of machine learning (ML) in society raises the question of how ethical dilemmas and choices inherent in computational artefacts can be made understandable and explorable for students. To investigate this, we developed a card-based design workshop in which high school students confront and reflect upon ethical dilemmas by designing their own ML applications. The workshop was developed in an iterative process engaging four high school classrooms with students aged 16-20. Through iterations on the design of the workshop we found that a) an understanding of fundamental ML served to qualify students' ethical reflections, b) students' design process served to reveal to them the complexity of the ethical dilemmas and tie them to the properties of the technology and to design decisions, and c) while we were able to stage qualified reflections regarding ethical dilemmas, students struggled with addressing these dilemmas in their designs.

Author Keywords
Computational Empowerment; Computational Thinking; Machine Learning; Ethics; Design Processes;

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

DIS2020, July 06–10, 2020, Eindhoven, NL

© 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ISBN 123-4567-24-567/08/06...$15.00

DOI: http://dx.doi.org/10.475/123_4

CCS Concepts
• Human-centered computing → Interaction design process and methods;

INTRODUCTION
The current focus on teaching Computational Thinking (CT) [46] across all levels of education, and the increased ease of incorporating emerging technologies such as machine learning (ML) into new applications, raise the importance of investigating how ethical aspects of new technologies can be made explainable and explorable to future designers and consumers of these technologies.

In recent years, ML has changed what is possible to achieve with the use of computational processing, expanding computers' ability to understand and interact with the world. This development promises great new possibilities, but it is also currently transforming the nature of social interaction, work, education, etc. [33], increasing information asymmetry [37] and making technologies less comprehensible to users [1]. This introduces a range of ethical dilemmas and issues unique to ML. These ethical issues have been addressed both in the ML community, through, e.g., the ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*), and in the HCI community [1, 39, 14], which implore that implementing ML into the world includes making ethical judgements and approaching ethical issues in the system design.

Today, companies are integrating ML into everyday technologies and infrastructures such as smart phones, maps, streaming services, etc., to provide better services to customers and to build new business models. However, if ethics are not carefully considered (and they often are not), ML can become a tool for unethical conduct, resulting in dark patterns [11], hypernudging [47], etc. Designers and developers navigate a myriad of different interests, balancing ethics against system efficiency, profitability, and user convenience, and are constantly making moral decisions and value judgements. Ethics are difficult, especially when applied in the real world, and designers and users alike are often faced with dilemmas when designing and using ML-powered products. We argue that, to take an active part in shaping the future, understanding everyday technologies is imperative, and thus youth should be able to recognise these dilemmas in the technologies they interact with and be able to reason about their consequences.

Within CT this more critical perspective is gaining momentum [23, 22, 43]. For instance, Computational Empowerment (CE), as proposed by Iversen et al. [22], advocates that students should be able to recognise the ethical choices and considerations in technology and learn to decode "the consequences of these choices for the people who will use the technology". Iversen et al. address this by having children go through a design process which includes reflecting on the more critical aspects of their designs. This design process utilises digital fabrication techniques such as 3D printing and laser cutting, and while these are effective design tools, we argue that, to deal with specific, technology-close ethical issues such as the ones described above, students need hands-on experience with the technology in question, e.g., ML. Involving students in design processes and giving them hands-on experience has been a part of CT from the start [36]. The purpose of doing so in CT has, however, been to turn technologies and computational concepts into powerful tools for the users, enabling them to approach problems in new ways and to explore new ideas. This focus on making powerful tools rarely leaves room for ethical considerations; instead CT often operates in simplistic micro-worlds [36], which are "undisturbed by extraneous questions" [36, p. 12], where ethical problems do not exist.

Inspired by CE, we explore how to involve high school students in designing ML systems in a way that supports exploration of and reflection on the technology-close, ethical dilemmas inherent in ML. To do so, we design and deploy a ML Ethics Workshop, in which high school students use a ML-specific set of card decks for designing ML applications. The workshop places ethical reflections at the center of the design activities and lets students discuss technology-close ethics based on their own designs of ML systems. We deployed four iterations of the workshop in four different classrooms, engaging 71 high school students aged 16-20. The materials used in the ML Ethics Workshop can be found here: [LINK].

The paper contributes to the fields of HCI and CT through a) a design rationale for an ethics-first construction kit for ML, b) experiences and insights from a Constructive Design Research process [28, 27] in which the ML Ethics Workshop was designed, and finally c) a discussion, based on findings from the ML Ethics Workshop, of the implications of using design processes for teaching students about ML and for having them reflect on and discuss technology-close ethical dilemmas of its use.

RELATED WORK
In this section we briefly review existing literature on using design as a learning approach, on card-based design methods, and on involving teenagers in design processes.

Design as a Learning Approach
Design as an approach for learning about technology can be dated back to Papert's constructionism [36, 18]. Since then, other researchers have expanded on these ideas (e.g., [24, 12]) and today, digital fabrication, Fab Labs and maker spaces are present at schools across the world [35, 16]. In a Scandinavian context, this approach has also been explored in recent years with a more critical approach to technology [5, 41, 22, 44]. The [email protected] project [22, 41] engages students in digital fabrication with different technologies to empower them to decode and understand their future, technology-mediated world [22]. To explore this, Smith et al. present a design process model for use in educational contexts, using "digital fabrication as a reflective and material tool for working with real-life and complex societal contexts" [41]. Eriksson et al. [5] argue for the need for a wide digital design literacy, which aims to "raise awareness about decision-making in technology design, the potential impact of technology and, ultimately, whether it contributes to meaningful relationships". The authors argue that this is best achieved in design processes where students make design decisions for real-world settings and are able to see the implications of these decisions.

Cards-based Design Methods
In recent years, the use of card-based design methods and tools has accelerated, and today more than 155 different card sets can be found in the design literature [38]. The use of cards in design processes has been found to enhance user-driven design processes, since they can help structure and scaffold design processes [15, 34]. Cards provide a common object of interest to participants, and thus can also help engage all participants [45]. Friedman and Hendry use their Envisioning Cards [9] to bring a more human-centered focus into the design process. The Envisioning Cards are designed to work across different contexts and technologies, and implore designers to, e.g., consider children as possible stakeholders of a system or consider how deliberately withdrawing from using a system might affect a (non-)user's everyday life. Situation cards [31] are cards with descriptions of realistic and problematic situations at a workplace, which participants discuss in a participatory workshop to encourage them to come up with ideas that solve everyday problems at the workplace. Another example is inspiration cards [15], which are used in design processes with disparate participants to improve engagement and support the generation of innovative and realistic design concepts with new technologies, letting the participants create posters with the cards. Card-based design methods that focus on a specific technology are few and far between [38], but there are a few examples. Tiles is a card-based toolkit for designing internet-of-things applications [34]. The toolkit consists of different card categories and a paper board, where participants can organise cards into meaningful IoT applications. Tiles supports reflection on users' creations through the Criteria Cards, which are designed to evaluate ideas by discussing subjects such as feasibility or sustainability. Mavroudi et al. successfully deploy the cards in a lower secondary school setting [29], finding that Tiles is a low-cost way of having students with little to no prior knowledge of the subject design and discuss IoT applications. They also evaluate students' understanding of privacy issues related to IoT after using Tiles, but never clarify how it supports learning and exploration of these ethical issues. Another similar, technology-close card-based method is PlutoAR [40], which is designed for K-10 classrooms to let students design and create their own AR application. Here, cards act as AR tags, and a paper board, named the Launchpad, lets students organise the AR cards, which can be interpreted by a mobile application that turns launchpads into scenarios that play out in the application. As such, students are able to create applications that, e.g., launch and guide a rocket through a maze.

We agree with Eriksson et al. [5] that involving students in design processes is a useful way of teaching students about decision-making in technology. However, in contrast to the digital fabrication-based design processes in the [email protected] project, we argue that to discuss technology-close ethical issues about a specific technology, e.g., ML, students should work with said technology. To let students design ML systems without having to learn how to program, we draw on the advantages of card-based design methods. As other work has shown [34, 29, 40], using technology-specific cards in design workshops is an advantageous way of engaging students who lack, e.g., programming skills in designing with specific technologies.

Designing With Teenagers
In the [NATIONAL] high school system, the average student is approximately between 16 and 20 years old, making most of the students participating in this work teenagers. Fitton et al. [6] argue that, since teenagers are soon-to-be adults, they should be engaged and involved in the shaping of the future technologies of which they will eventually become users. Further, Iversen & Smith [21] argue that this involvement can be achieved by providing teenagers with "meaningful alternatives to existing technologies" by involving them in design processes of new technology. Involving teenagers in such processes is, however, challenging [6, 17, 20]. Hansen & Iversen [17] present an approach for teen-centric PD, in which motivation is based on encouragements, tools, technology, identification, cooperation, endorsements as experts, and performance. Similarly, Iversen et al. [20] seek to understand how teenagers are motivated in PD projects by analysing a number of PD workshops with teenagers, and propose similar specific steps to engage students. In the interventions reported on in this paper, we encountered a few issues that we believe are specific to working with teenagers. In particular, many students were boundary-pushing and extreme in their discussions. These issues are reported on in the Findings section of this paper.

THE ETHICS OF COMPUTATIONAL PRODUCTS
As products utilising computational processes, and the infrastructures in which they are implemented, become more complex, the ethical issues in their design and implementation also become more complex [4, 8]. Computers' ability to process information about individuals more efficiently than ever has many benefits, but it also introduces new ethical dilemmas which often do not have clear-cut solutions [7]. This is especially the case because data has become an important value asset to organisations, which build their business models on it and will go to great lengths to collect it [3, 37]. Software developers, engineers and designers in these organisations are constantly making value judgements, choosing what is morally right and wrong in every decision, counterbalancing different interests and the organisation's priorities [8, 19, 33]. These judgements are hidden in software and complex infrastructures, making them invisible to users of the organisations' products. This invisibility factor can be exploited for unethical abuse, but more importantly it makes the ethical choices and dilemmas inherent in computational products difficult for users to reason about [33]. When talking about ethics in this paper, we refer to these moral decisions made by individuals and organisations, sometimes referred to in the literature as micro-ethics [19]. The implications of these moral decisions have only become magnified with the propagation of ML into everyday technologies, as has been articulated in the ML community in relation to fairness, accountability, transparency, nudging, etc. [1, 47].

The ML Ethics Workshop is designed with the aim of making these moral issues and decisions understandable and explorable for students. Similar to many existing CT and fabrication tools [5, 24], our focus is to make students more capable of understanding today's technology-mediated society. However, where others focus on making computational concepts understandable [13], on making technologies into powerful tools for students, or on co-creating meaningful futures [22], our focus is different. Our aim is specifically to make technology-close ethical dilemmas in ML-based computational systems understandable, and to allow students to reflect on the choices embedded in the products and services they use every day.

METHOD
The work presented here uses the Constructive Design Research (CDR) methodology [27, 10, 28] to investigate how to make ethical dilemmas and choices inherent in computational artefacts understandable and explorable for high school students. We hypothesised (in the CDR sense, see [28, 2]) that high school students would be able to have meaningful, technology-close discussions about ML ethics by designing, reflecting on, and redesigning their own ML systems. In CDR projects, hypotheses are instantiated through the creation of artefacts [28, 2, 42], and knowledge creation is driven by experiments with and exploration of these artefacts [27, 2]. For this work, the central artefact is the ML Ethics Workshop, including cards and other hand-outs, which is presented below. Throughout the process, we have explored how different iterations of the workshop could be used to pursue the above goal, and the findings presented below come from our experiences with the different workshops. As noted by Koskinen et al. [26], CDR may be accountable to several concerns of theory and practice. In this work we are concerned with exploring and expanding the scope of CT research, with exploring how design can be used in learning situations, as well as with producing a concrete design method for making ML understandable and explorable to high school students. As such, we see the ML Ethics Workshop as a contribution in itself and as our main way of exploring the space of using design processes for teaching ML.

MACHINE LEARNING ETHICS WORKSHOP
This section describes the current version of the ML Ethics Workshop, which guides high school students through an ethics-first design process, using cards and boards to guide their process. The workshop is designed to be used in all kinds of high school classrooms, and it is not expected that students have a special interest in, or knowledge about, ML. It has been deployed in one-and-a-half- to three-hour interventions, but we believe it can scale up to a full-day workshop. In groups, students design and describe a ML application to help themselves and their peers in their everyday lives. Halfway through the process, students are challenged with the ethical implications of implementing their application in a real-world setting. To address these implications, they must redesign their application, dealing with ethical issues which only become more complex as they discuss them, and weigh these against the functionality and experience of their application. Throughout the workshop, we work specifically with supervised, classification-based ML.

This section will first present the rationale behind the workshop, then describe the cards and boards used in the kit and, last, thoroughly describe the current format of the workshop.

Design Rationale
The workshop guides students through a design process where they confront the moral choices and ethical dilemmas of designing and implementing their own ML applications. First, students explore, through a design process, how ML can improve their own and their peers' lives. Here, students are asked to be as specific as possible in describing what data their applications use, what the ML component is predicting and how it is trained. This is to ensure that the later ethical discussions about their ML applications can be used when making specific choices and addressing dilemmas in the design and implementation of their application. When students have a well-described ML application which they believe can help their peers, the workshop changes character to focus on the ethical issues related to the application they just designed. By discussing questions about privacy, explainability, accountability, etc., students identify the most critical ethical aspects in the design and implementation of their application. In this discussion students identify the ethical decisions they had already consciously or unconsciously made, and what questions they must further answer in their design process before they can morally answer for their application. To let students experience how these ethical questions often lead to individual value judgements or to choosing between different undesirable outcomes, they are tasked with redesigning their application to approach one or more of the currently most critical issues. In this task, they have to make choices, deciding on what is most important, striving for a functional and morally accountable application, which often turns out to be difficult. Last, students present their applications and how they have approached the ethical issues in their design and implementation, followed by a classroom discussion. In these discussions there are no external 'bad guys' to blame, only the students themselves and their arguments for their design choices.

Design Artefacts
Three decks of cards and a board for describing a ML system are used by the students in the workshop. The three decks consist of, respectively, 14 data cards, 9 ethics cards, and 26 people cards, see Figure 2, all sized 6 × 9 cm and colour-coded to communicate which deck they belong to. The data deck is used to analyse a context for possible data sources, exposing to the participants how data can be found everywhere, and to provide examples of a variety of different types of data and data sources. Each card describes a category of data sources (e.g. health data, news, users' locations) and provides a few examples of specific data sources in the category (e.g. pulse, breaking news, time at a location) to help participants understand the category. The people deck is used to analyse the context in which the application will be implemented with a focus on people, with each card describing a potential stakeholder (e.g. a colleague, a sibling, a teacher). Participants use this deck to identify who the application may affect, and how these people can be included in the considerations about the design of the application. The ethics deck is used for reflecting on the ethical implications of implementing a ML system in a context, asking the participants to consider ethical ML issues [1] (e.g. explainability, privacy, accountability). Each card asks the participants one or two questions to help frame a discussion around the issue in relation to their design, e.g., "What happens if your system makes a bad decision? Who is accountable?"

The ML board, see Figure 3, is an A3 board with a visualisation of a supervised ML model, leaving blank fields for the students to fill in when describing the ML system in their idea, forcing them to be specific about which data their system is using and what it is predicting. Participants first describe their idea and then move on to describe the ML system: what the data in their system describes (e.g. a student, a kick to a ball, a dish), which data the system learns to predict (e.g. the student's grade, the precision of the kick, the calories in the dish) and up to four data sources the system will use to make these predictions (absence, accelerometer data, weight). Last, they name their model so they can reference it later in the design process.
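For the technically inclined reader, the board's fields map directly onto the elements of a supervised classification pipeline. The sketch below illustrates this mapping using the board's own worked example (data describing a student, predicting a grade from absence, accelerometer data, and weight). It is a minimal sketch only: the synthetic data, feature ranges and use of scikit-learn are our assumptions for illustration, since the workshop itself is entirely paper-based.

```python
# A minimal sketch of the supervised classification pipeline the ML board
# describes. All data is synthetic and the feature choices are illustrative
# assumptions, not part of the workshop kit.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n_students = 200  # "What does the data describe?" -> a student

# "What data input is required to make the prediction?" (up to four inputs)
X = np.column_stack([
    rng.uniform(0.0, 0.4, n_students),     # absence rate
    rng.uniform(2000, 15000, n_students),  # daily steps from accelerometer data
    rng.uniform(45, 100, n_students),      # weight in kg
])

# "What data should the model be trained to predict?" -> the student's grade,
# reduced here to a coarse pass/fail label tied to absence in the synthetic data.
y = (X[:, 0] < 0.2).astype(int)  # 1 = pass, 0 = fail

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Name of the model" -> students name their model to reference it later.
grade_predictor = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", grade_predictor.score(X_test, y_test))
```

Note that the board deliberately hides the training mechanics (the last four lines of the sketch) and only asks students for the parts they can reason about: what the data describes, what is predicted, and from which inputs.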

The Workshop Format
The workshop supports groups of participants in designing a ML system and reflecting on the ethical implications of implementing it in the world. It consists of nine steps: a short introduction to ML, a presentation of a case, six activities the groups are asked to perform, and a joint presentation and discussion of the participants' designs. The nine steps are described below:

1. ML introduction: A short introduction to supervised ML. Students were introduced to the predictive capabilities of ML, discussed a specific use of ML, i.e., using ML in democratic elections, and gained hands-on experience with ML through an interactive ML-learning tool [25].

2. Case presentation: A short description of a narrative which frames the design process with a context, stakeholders, and an overall goal for the products the participants will be designing. It may also restrict the product design itself (e.g., it must be an app or a wearable). It situates ML in students' own lives and lets them explore how ML can solve problems they find important and provide new opportunities that are valuable to them.

[Figure 2 shows two example cards from each deck. Data cards: "Interactions: How people interact with each other. Who do they talk to? For how long? Where?" and "Smart home: Sensors in people's homes. When are they home? How much power do they use? How much TV do they watch?" Ethics cards: "Data Collection: Are users and others aware that data is collected about them? Are they aware of how the data is used?" and "Visibility: Is your system visible when you meet it? Is the use of Machine Learning visible in your system?" People cards: "Grandparents" and "Siblings".]

Figure 2: Two examples of (from left to right) data, ethics and people cards. Translated to English from the original language.

[Figure 3 shows the ML board. Its fields read: "Description of idea", "What does the data describe?", "What data should the model be trained to predict?", "What data input is required to make the prediction?" (with blank "Data input" fields), and "Name of the model".]

Figure 3: Board for describing a ML system. Originally in [LANGUAGE] and annotated in English.


3. Analysis of context for data: Each group choose a few cards from the data deck and use them to analyse the case context for possible data sources.

4. Ideation: Based on the case description and the exploration of data sources, each group conduct a short IDEO-style ideation [32], where they come up with as many ideas as possible. At the end, each group are asked to choose the best idea (or combine multiple ideas into one great idea).

5. Description of ML system: Each group use the ML board to describe the ML component of their idea. It helps participants become very specific about their use of data and ensures all ideas are based on ML. If it is not possible for a group to fill in the board, they need to go back and revise their idea.

6. Reflections on ethics: Each group are dealt an ethics card and the whole people deck, which they use to reflect on the possible implications of implementing their ML application in the real world, and how it will affect different actors. They choose the one issue they find most critical about their system.

7. Redesign product: Each group discuss how they can address the ethical dilemmas in the design of their products and describe in detail, through sketches, texts, etc., how their system should be redesigned to address this issue.

8. Presentation: Each group prepare a one-minute presentation which describes their concept, their ethical issue and how they approached it in their design. Every group make their short presentation, followed by a short applause from the other groups.

9. Discussion: The students' designs are used for a joint discussion about ethics, grounded in the ethical problems the participants have faced in the design process and how they tried to solve them.

All groups are given a limited amount of time to complete each activity to ensure progression in the design processes. Activities are presented and described just before they start, to make participants focus on the given activity and not the whole process. The internal collaboration in the groups is left informal, but mediated by the artefacts.

Intervention    #1                   #2               #3           #4
Length          3 hrs                1.5 hrs          2 hrs        2 hrs
Location        University           Own classroom    University   University
Background      Technology & design  Social studies   IT           Informatics
No. students    27                   25               5            14

Table 1: Overview of the different interventions as well as the classrooms that participated in them.

DESIGNING THE ML ETHICS WORKSHOP
This section briefly describes the purpose, design rationale and the insights from each intervention that influenced the design of the following version of the ML Ethics Workshop. The findings from the workshop will be presented in the following section. The ML Ethics Workshop was developed in an iterative CDR process, and each iteration of the workshop was deployed in a high school classroom. A total of 71 students, aged 16-20, participated in the interventions. In total, four interventions were conducted in four different classrooms with different backgrounds and from different high schools. All classrooms were volunteered by their teacher and participated as part of their high school education. Each student, however, consented individually according to local regulations. Intervention 2 took place at a local high school. All other interventions took place in a large room at the researchers' university. An overview can be found in Table 1.

All iterations of the workshop followed the same basic structure: students were given an introduction to ML, then they were given a design case to work on and were split into groups of 3-5 people to do so. After working through different design phases, students presented their ML application and the ethical issues they had been discussing. Table 2 provides an overview of the artefacts used in each intervention. In the following, the artefacts will be referred to by their names in the table.

Intervention 1: Testing An Idea
In the first intervention, we tested the first version of the Ethics Workshop, which was our initial approach to designing a workshop that stages ethical dilemmas. The workshop was inspired by how the Tiles Toolkit [34] uses different technology and interaction cards, a board where the cards can be organised and a playbook to guide a design process, and by how the Envisioning Cards [9] stage discussions about human values through focused design activities. We brought context cards with different design contexts and possible directions for the design process. The contexts were based on students' everyday lives, e.g., sport, dating, school. To scaffold the design process, we brought six different card decks (see figure 2), post-its and A3 boards. The main idea was that students should come up with an idea for a ML application that was valuable to them, use the application to discuss ethical issues of ML systems, and address some of the ethical issues in the interaction design of the application.

Insights:
Students were engaged throughout the workshop, and seemed motivated by the many cards and the possibility to come up with their own ideas. The number of cards did, however, seem to hinder their reflections about the activities, as students were more focused on sorting and getting through all the cards in each deck than on discussing a few cards in depth. The ML cards did not support the students very well in describing a ML system; they seemed to need more structure to design meaningful ML systems. As a result, the ethical discussions often lacked technical depth. Students were good at identifying overall ethical issues, but they perceived their own ML applications as either morally good or bad, based on their own intentions in the design process, with nothing much to do about it.

Intervention 2: Describing a ML System
In the second intervention, we provided additional scaffolding for the students' design of an ML system through an introduction to the ML board, which can be seen in Figure 3. It visualises a supervised ML system, which students must fill in based on their ML application. Furthermore, the workshop was simplified to only contain data, ethics and people cards to make more room for immersion and reflection.

Insights:
This intervention was conducted with a social studies classroom, and the students struggled with the openness of the design case, which was based on the context cards (see Table 2 for an example), and with the requirement of using ML. Many groups spent most of the time deciding on an idea. Few groups were able to fill in the ML board correctly, but the groups who succeeded were better at reflecting on the ethical issues in their application.

Intervention 3: Combining Design and Ethical Reflection
The third workshop explored how to make more room for ethical discussions, and how to expose the ethical choices early in the design process. To simplify the workshop further, the people deck was removed, and student groups picked only two data cards and were provided with one ethics card. This was intended to make more room for discussing each card in depth and to ensure that each group worked with different types of data and ethical problems, making the classroom discussions more diverse. To ease the ideation and conceptualisation process for the students, the context cards were replaced with a single design case about helping lonely fellow students, providing their ML application with a specific purpose with societal relevance, and limiting the design space to mobile applications. Last, students were asked to redesign their system using wireframe sketches of their application, to encourage them to become more specific in their design. They were asked to identify the most critical ethical issue in their application and sketch how they would address it in the interface.

Insights:
This workshop was conducted with only five students, who worked in a single group, but this version of the workshop was also used in Intervention 4 with only limited changes. The more delimited design case and fewer data cards seemed to help students generate more well-described ideas, making it easier for them to fill in the ML board. The group's ethical discussions had more depth than any group discussions in earlier workshops, and dealt with the complexity of coming up with good solutions to ethical problems.

Artefact           Example                                                #1  #2  #3  #4
Ethics Cards       See figure 2, left                                     X   X   X   X
Data Cards         See figure 2, middle                                   X   X   X   X
Humans Cards       See figure 2, right                                    X   X       X
Context Cards      "Gaming: How can [ML] be used to...                    X   X
                   - improve your performance?
                   - create better habits?
                   - strengthen your friendships? ..."
ML Board           See figure 3                                               X   X   X
ML Cards           "Feature: A characteristic of the phenomenon we         X
                   observe. The independent variable."
Output Cards       "Emoji: Use emojis to express emotional outputs."      X
Interaction Cards  "Smart Watch: Use a smart watch to interact with       X
                   the system."

Table 2: An overview of the different artefacts used throughout the different versions of the ML Ethics Workshop. The right side of the table describes in which intervention each artefact was used.

Intervention 4: Validating The Concept
In the fourth and final intervention, we tested the composition of intervention 3 with more groups. The only change was that the people cards were added to the kit again, to support the ethical discussion in being centred around the people who would be affected by the introduction of the students' applications.

Data Collection & Analysis
Designing workshops for discussing ethics in computational products in high school classrooms is a rather unexplored area. Therefore, data has been collected and processed with an open-ended approach. Throughout all workshops, data was collected through observations, sound recordings, photography, video, and by collecting students' produced artefacts, e.g., paper-based mock-ups of their designs. All participants in the workshops were asked for their consent, and only data about consenting participants was collected. After each intervention, a write-up [30] of field notes and observations was produced and discussed between the researchers present at the intervention. The write-ups focused on how students used the design artefacts in the design process, how they approached each step in the workshop, how they talked about and understood machine learning, and how they discussed ethics in relation to their idea. Selected audio recordings of students' internal discussions were transcribed and analysed with a focus on how students identified and talked about ethical issues, and how they came to an agreement on how to approach ethical issues in their design. Photography and video were reviewed informally with a focus on how students collaborated around the design artefacts, and how they presented the ethical issues in their application to the classroom. The artefacts produced by students were analysed with a focus on how they described their ML system using the design artefacts, and, again, how they identified and approached ethics in their designs.

FINDINGS
In this section, we present our findings from the interventions with regard to how they were able to engage students in reflections about ethics related to ML-based systems. These findings are synthesised from an analysis across different data types and workshops. The analysis was done collaboratively between the three authors.

Understanding of ML Served to Qualify Reflections
Throughout the different workshop iterations, we experimented with how students could be supported in describing ML systems that conceptually and technically made sense and could, to some degree, be implemented in the real world. In Intervention 1, we gave students cards that represented specific parts of a ML pipeline (e.g. features, label, training), and asked them to annotate and combine cards to describe their systems, as seen in Figure 4. This was effective to the extent that all groups used the cards to describe their ML system, but we observed that they had a hard time describing meaningful systems, and in most cases the ML aspect of their designs was described as a magic touch that was sprinkled on top. In the example in Figure 4, a group of students designed a system for optimising how their school's rooms are scheduled to best fit the needs of different subjects, teachers, etc. The group presented the ML system as being able to "predict the best schedule for the school", but it was unclear how exactly ML would help to solve this issue, i.e., how would previous schedules be evaluated as better or worse, and what data would this be based on? In the subsequent discussions on ML ethics, this became a hindrance to the students, who were only able to shallowly discuss the issues of the system. To deal with possible issues about algorithmic decision making, their system would schedule classes and study trips "in a humane and stress free way", which is a good intention, but does not deal with the underlying technological issues or describe how they would achieve this. In a similar example, a group designed a ML system for helping athletes improve their workouts. The group discussed whose responsibility it was if the system recommended an exercise that caused an injury, but their solution was to employ professional testers to prevent this from happening. It was, however, unclear how these testers would prevent the system from recommending wrong exercises, even if all exercises had been tested. It was also unclear how the testers would be made responsible for bad recommendations.

Following this finding, the board seen in Figure 3 was introduced to scaffold students' design of the ML aspect of their system. Instead of giving them the pieces for an ML system, we provided them with the basic structure of one, and asked them to fill it in. This approach seemed to better support the students in designing conceptually and technically meaningful systems. Not all systems were sound, and students struggled especially with the specificity of data inputs and which data to predict. In contrast to the first workshop, however, many students were able, at least to some extent, to create meaningful ML systems. In the subsequent discussions on ethics, these students were able to have more detailed discussions about their system's ethical issues. For example, in the third intervention, a group designed an app for recommending social groups to lonely teenagers based on their interests. The students were given an ethics card which questioned the responsibility of the algorithmic decisions in their system, and based on their design, they were able to have an in-depth and specific discussion about responsibility:

Student 1: "What if a group, where something is going on under the radar, is added to the system by us [...] and something bad happens in that club, because we didn't know there was a problem with them. Whose fault is it then?"
Student 2: "It is still the responsibility of the club."
Student 1: "But we brought them into our system..."
Student 3: "I think those who are responsible for the app are to blame. And I'm thinking that there is always someone in charge. That's a director's job."
Student 4: "Okay, that is the Minister of Health. So it should be the job of the Minister of Health, or what?" (The Ministry of Health was the owner of the application in the design case.)
Student 1: "I don't think we should take it that far up. We need a specific division to monitor groups."
[continued discussion]
Student 3: "Then we should have a rating system or the ability to report groups, and then someone should review them afterwards [...] Someone should always be to blame."

Their discussion illustrates how difficult it is to place responsibility once something goes wrong in ML systems, and, as discussed below, these students were able to design a system that (in a basic way) addressed this issue.

The Design Process Tied Ethics to Design Decisions
We found that, by tightly controlling the design process, students were supported in connecting their discussions about ethics to their designs, and they were able to use their discussions in the redesign of their systems.

This was, however, less the case in the first interventions. In Intervention 1, a group of male students designed a ML system to rate and sort women according to their breast size.

Figure 4: A ML system, designed by students in intervention 1, for optimising scheduling and booking rooms in a school.

This showcased (in a rather extreme way) the approach taken by many students when designing their ML systems in the first interventions. While we provided contexts for the students in Interventions 1 and 2, many students came up with ideas that they found interesting for themselves, but without considering how the ideas would impact others and whether they (and others) would actually benefit from the idea. In this way, the context of "Social Media" became the breast application, and the context of "Urban Life" became "Club Counter", where an AI would recommend clubs based on their clientele. These groups did not do well in discussing the ethical aspects of their designs. They could easily identify ethical issues in their designs, but it was difficult to address them in their redesigns, as they could not account for why their application should exist in the first place. Indeed, the group wanting to rate women simply answered "No", with no further elaboration, to the question "Is it ethical to implement this?" when asked to reflect on the ethical aspects of their design.

To support students in the design of systems that could be meaningfully discussed (and were not degrading), we introduced the specific case about helping lonely fellow students in Intervention 3, and asked students to consider and sketch the ways in which their system addressed their discussions about ethics. This approach seemed to better tie together the students' design process and their ethics discussions, e.g., in the group described above, who designed a system for recommending groups to lonely teenagers. From their discussion, they decided to implement a system for rating and reporting groups (see Figure 5). While this is a traditional (and potentially harmful) way of dealing with responsibility, the students were able to identify the issue in their discussion (see above) and design a solution addressing it.

In Intervention 4, a group designed a similar app for predicting "soon-to-be" lonely people and pairing them up with each other. This group discussed the ethics of data collection, and the dilemma of the app "giving away other people's loneliness" by pairing them with strangers (see Figure 6).

Figure 5: Wireframe sketches of an app, designed by students in intervention 3, for recommending groups to lonely teenagers based on their interests.

This group designed the app to include a "My Data" page, on which the user can see exactly which data is given away and can choose to remove data they do not want the app to include in its recommendations.

Challenging to Address Ethical Dilemmas in Design
Throughout the interventions we found that students struggled with addressing the ethical dilemmas they had discussed in their designs. Students who were able to have qualified and interesting discussions about ML ethics found it difficult to come up with good solutions to their issues, and often designed systems in which the user (themselves and their peers) was responsible for system faults or data collection. Many students were trapped in a dilemma of privacy: their application and the quality of its solutions depended on access to personal data from their users, and for groups who had discussed the ethics of data gathering, the design solutions were often akin to the traditional "Terms and Conditions" agreements that most applications and websites now have (see Figure 6). As described above, some let users control what data would be included in predictions, even if this would lead to worse recommendations. In Intervention 3, the group who reflected on accountability in their application for lonely people also discussed privacy issues and concluded that users should be incentivised to provide as much data as possible, since this would provide better recommendations, or as one of the students put it: "The more lonely they are, the more motivated they become to get better recommendations". Once again, this illustrates a dilemma of choosing between different interests that is present in real-life ML applications.
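The trade-off the students kept running into, where letting users withhold personal data degrades the very predictions the application depends on, can also be made concrete in code. The sketch below retrains a model after one input is withheld and compares held-out accuracy; the dataset, the feature names and the "loneliness" label are purely synthetic assumptions of our own, chosen so that the most privacy-sensitive input is also the most informative one, and the sketch is an illustration of the dilemma, not part of any student's design.

```python
# Illustrative sketch of the privacy/accuracy dilemma: dropping a
# user-withheld input typically costs predictive quality. The dataset,
# features and "loneliness" label are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(seed=1)
n = 500

hours_alone = rng.uniform(0, 12, n)          # most sensitive, most informative
messages = rng.poisson(20, n).astype(float)  # messages sent per day
clubs = rng.integers(0, 5, n).astype(float)  # club memberships

# Synthetic label driven mostly by the sensitive input.
lonely = (hours_alone + rng.normal(0, 2, n) > 7).astype(int)

full = np.column_stack([hours_alone, messages, clubs])
masked = np.column_stack([messages, clubs])  # user withholds hours_alone

model = LogisticRegression(max_iter=1000)
print("all inputs available:    ", cross_val_score(model, full, lonely).mean())
print("sensitive input withheld:", cross_val_score(model, masked, lonely).mean())
```

Under these assumptions, the masked model falls to near-chance accuracy, which mirrors the students' observation that the data users most want to withhold is often exactly the data the system needs.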

Teenagers Test Boundaries
Throughout the interventions, we found that students would often test our and their peers' boundaries. The system for rating women by breast size is a glaring example, and throughout the interventions, many students used rather extreme examples to illustrate their considerations, such as illustrating a system failure by suggesting that a 6-year-old user would be recommended to join a sex club, or discussing what would happen if a terrorist group accidentally ended up in their application.

Figure 6: An app, designed by students in intervention 4, for predicting loneliness and pairing people before they become lonely. The sketches show how users must agree to the app's terms and conditions, but can also choose what data the app bases its recommendations on.

One immediate response could be to stop the students in their tracks and discipline them, but we wonder if this would be appropriate, keeping in mind our goal of staging reflections. Instead, we found it helpful to provide a more specific case for the students, to help them design systems that would better support their discussions.

DISCUSSION
As argued above, many existing approaches for using design processes in CT focus on empowering students to become creators [36] or on discussing broad, societal implications of technology [22]. These approaches often neglect discussions that are close both to the technology and to its implications. In this section we reflect on our workshop format, based on the findings from the interventions, and discuss how it can inform future workshops or other activities with similar objectives of staging reflections on ethical dilemmas in computational systems and products.

First, we found it helpful to frame the entire design process in a way that helps students design ethically interesting systems. In Interventions 1 and 2, students were, to a large degree, responsible for framing their task and were only provided the context cards (see Table 2) to help them. Starting from Intervention 3, we provided students with a case about loneliness, which is an actual issue among [NATIONAL] teenagers, meaning that students were designing for a real and vulnerable demographic and were encouraged to consider how this demographic would be affected by their design choices.

Second, the focus of the design process should be moved from the possibilities of a technology towards the implications of implementing said technology in real-world settings. We saw that when we supported students in designing implementable ML systems, they seemed to have more insightful discussions about the implications. One reason for this, we argue, might be that design processes are typically future-oriented in the sense that we as designers are interested in what-could-be. However, for a technology-close discussion of the implications of a specific technology, it is necessary to look at the specific here-and-now issues of that technology. In turn, this implies a change in the criteria for success of such a design process. Our inclination as designers was to look at the product of the students' design processes: were they able to successfully communicate an idea or a design, did they address the case provided to them, are they solving an actual issue? These are questions that might be used to evaluate typical design processes, but as the goal has changed, we argue, so should the questions. Instead of focusing on prototyping as an end in itself, we suggest seeing insightful discussions about ethics as the goal, and prototyping as a means to ground these discussions in something concrete. We suggest asking questions akin to the following to evaluate whether the design process was successful: are students able to formulate a technically feasible system; can they identify how their idea and design choices relate to specific, technology-close ethical issues; are they able to discuss these with their own idea as the point of departure; and can they address their considerations when realising or modifying their idea?

LIMITATIONS
Since students came from different backgrounds, and many students were already interested in technology, their existing knowledge of ML varied. In addition, because of the differences in the length of the workshops (see Table 1), students received slightly different introductions to ML, and students participating in Intervention 2 had already had a longer introduction to ML [25].

CONCLUSION
In this paper, we have explored how ethical dilemmas and choices in ML applications can be made understandable and explorable for students. To do so, we designed a workshop format in which students confront the ethical choices and dilemmas of designing and implementing their own well-intentioned ML applications. To realise this concept, we conducted four interventions with different Scandinavian high school classes, bringing a new iteration of the workshop to each intervention.

Experiences with the workshop format illustrate how an understanding of fundamental ML, and iterating on their own design ideas, helped to qualify students' ethical reflections. In addition, the design process served to reveal the complexity of the ethical dilemmas and to tie them to the properties of the technology and to design decisions. Grounding ethical discussions in students' own designs made them accountable for their choices and illustrated how difficult it is to come up with clear-cut solutions to these ethical issues. Based on these insights, we recommend that, in order to scaffold technology-close ethical discussion, design processes should focus less on technological possibilities and more on implications and consequences, and we have presented our design workshop as an approach for doing so.

We hope that future research in CT will address the ethics in computational products and systems and aim to make technology-close ethical dilemmas and decisions explorable for students.

REFERENCES[1] Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y.

Lim, and Mohan Kankanhalli. 2018. Trends andTrajectories for Explainable, Accountable andIntelligible Systems. In Proceedings of the 2018 CHIConference on Human Factors in Computing Systems -CHI ’18. ACM Press, New York, New York, USA, 1–18.DOI:http://dx.doi.org/10.1145/3173574.3174156

[2] Anne Louise Bang, Peter Gall Krogh, Martin Ludvigsen,and Thomas Markussen. 2012. The Role of Hypothesisin Constructive Design Research. In The Art of Research2012: Making, Reflecting and understanding. Helsinki,1–11.

[3] Ann Brown. 1992. Design Experiments : Theoreticaland Methodological Challenges in Creating ComplexInterventions in Classroom Settings. The journal of thelearning sciences 2, 2 (1992), 141–178.

[4] Geoffrey Brown. 1990. The information game: Ethicalissues in a microchip world. (1990).

[5] Eva Eriksson, Ole Sejer Iversen, Gökçe Elif Baykal,Maarten Van Mechelen, Rachel Smith, Marie-louiseWagner, Bjarke Vognstrup Fog, Clemens Klokmose,Bronwyn Cumbo, Arthur Hjorth, Line Have Musaeus,Marianne Graves Petersen, and Niels Olof Bouvin. 2019.Widening the scope of FabLearn Research : IntegratingComputational Thinking , Design and Making. InFabLearn Europe ’19.

[6] Daniel Fitton, Janet C C. Read, and Matthew Horton.2013. The challenge of working with teens asparticipants in interaction design. In CHI EA ’13 CHI

’13 Extended Abstracts on Human Factors in ComputingSystems. ACM Press, Paris, France, 205. DOI:http://dx.doi.org/10.1145/2468356.2468394

[7] Tom Forester and Perry Morrison. 1994. Computerethics: cautionary tales and ethical dilemmas incomputing. Mit Press.

[8] Christopher Frauenberger, Marjo Rauhala, and Geraldine Fitzpatrick. 2016. In-Action Ethics. Interacting with Computers 29, 2 (1 2016), 220–236. DOI: http://dx.doi.org/10.1093/iwc/iww024

[9] Batya Friedman and David G. Hendry. 2012. The Envisioning Cards: A toolkit for catalyzing humanistic and technical imaginations. Conference on Human Factors in Computing Systems - Proceedings (2012), 1145–1148. DOI: http://dx.doi.org/10.1145/2207676.2208562

[10] William Gaver. 2012. What Should We Expect from Research through Design?. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’12). Association for Computing Machinery, New York, NY, USA, 937–946. DOI: http://dx.doi.org/10.1145/2207676.2208538

[11] Saul Greenberg, Sebastian Boring, Jo Vermeulen, and Jakub Dostal. 2014. Dark patterns in proxemic interactions: A critical perspective. Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, DIS (2014), 523–532. DOI: http://dx.doi.org/10.1145/2598510.2598541

[12] Jean M. Griffin. 2018. Constructionism and De-Constructionism as Complementary Pedagogies. In Constructionism 2018. 225–237.

[13] Shuchi Grover and Roy Pea. 2013. Computational Thinking in K–12. Educational Researcher 42, 1 (2013), 38–43. DOI: http://dx.doi.org/10.3102/0013189X12463051

[14] Jonathan Grudin. 2017. AI and HCI: Two Fields Divided by a Common Focus. AI Magazine 30, 4 (9 2017), 48. DOI: http://dx.doi.org/10.1609/aimag.v30i4.2271

[15] Kim Halskov and Peter Dalsgård. 2006. Inspiration card workshops. In Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, DIS, Vol. 2006. 2–11. DOI: http://dx.doi.org/10.1145/1142405.1142409

[16] Erica Rosenfeld Halverson and Kimberly Sheridan. 2014. The Maker Movement in Education. Harvard Educational Review 84, 4 (2014), 495–504. DOI: http://dx.doi.org/10.17763/haer.84.4.34j1g68140382063

[17] Elin Irene Krog Hansen and Ole Sejer Iversen. 2013. You are the real experts!: Studying teenagers’ motivation in participatory design. In IDC ’13: Proceedings of the 12th International Conference on Interaction Design and Children. ACM Press, New York, New York, USA, 328–331. DOI: http://dx.doi.org/10.1145/2485760.2485826

[18] Idit Harel and Seymour Papert (Eds.). 1991. Constructionism. Ablex Publishing.

[19] Joseph R. Herkert. 2005. Ways of thinking about and teaching ethical problem solving: Microethics and macroethics in engineering. In Science and Engineering Ethics, Vol. 11. Kluwer Academic Publishers, 373–385. DOI: http://dx.doi.org/10.1007/s11948-005-0006-3

[20] Ole Sejer Iversen, Christian Dindler, and Elin Irene Krogh Hansen. 2013. Understanding teenagers’ motivation in participatory design. International Journal of Child-Computer Interaction 1, 3-4 (9 2013), 82–87. DOI: http://dx.doi.org/10.1016/j.ijcci.2014.02.002

[21] Ole Sejer Iversen and Rachel Charlotte Smith. 2012. Scandinavian participatory design. In Proceedings of the 11th International Conference on Interaction Design and Children - IDC ’12. ACM Press, New York, New York, USA, 106. DOI: http://dx.doi.org/10.1145/2307096.2307109

[22] Ole Sejer Iversen, Rachel Charlotte Smith, and Christian Dindler. 2018. From computational thinking to computational empowerment. In PDC ’18: Proceedings of the 15th Participatory Design Conference: Full Papers - Volume 1. ACM Press, Hasselt and Genk, Belgium, 1–11. DOI: http://dx.doi.org/10.1145/3210586.3210592

[23] Yasmin Kafai, Chris Proctor, and Debora Lui. 2019. From theory bias to theory dialogue: Embracing cognitive, situated, and critical framings of computational thinking in K-12 CS education. In ICER 2019 - Proceedings of the 2019 ACM Conference on International Computing Education Research. 101–109. DOI: http://dx.doi.org/10.1145/3291279.3339400

[24] Yasmin B. Kafai and Mitchel Resnick. 2012. Constructionism in practice: Designing, thinking, and learning in a digital world. Routledge.

[25] Magnus H. Kaspersen and Karl-Emil Kjær Bilstrup. 2020. (in press) VotestratesML: Civics as a Vehicle for Teaching Machine Learning. In Constructionism.

[26] Ilpo Koskinen and Peter Gall Krogh. 2015. Design Accountability: When Design Research Entangles Theory and Practice. (2015). http://www.ijdesign.org/index.php/IJDesign/article/view/1799

[27] Ilpo Koskinen, John Zimmerman, Thomas Binder, Johan Redstrom, and Stephan Wensveen. 2011. Design Research Through Practice: From the Lab, Field, and Showroom. Number 3. Elsevier. 56–58 pages.

[28] Peter Gall Krogh and Ilpo Koskinen. (in press). Drifting by Intention: Four Epistemic Traditions from within Constructive Design Research (1 ed.). Springer International Publishing. 162 pages.

[29] Anna Mavroudi, Monica Divitini, Francesco Gianni, Simone Mora, and Dag R. Kvittem. 2018. Designing IoT applications in lower secondary schools. IEEE Global Engineering Education Conference, EDUCON 2018-April (2018), 1120–1126. DOI: http://dx.doi.org/10.1109/EDUCON.2018.8363355

[30] Matthew B. Miles and A. Michael Huberman. 1994. Qualitative Data Analysis: An expanded sourcebook (3rd ed.). SAGE Publications, London, UK.

[31] Preben Holst Mogensen and Randall H. Trigg. 1992. Using Artifacts as Triggers for Participatory Analysis. DAIMI Report Series 21, 413 (8 1992). DOI: http://dx.doi.org/10.7146/dpb.v21i413.6726

[32] Bill Moggridge and Bill Atkinson. 2007. Designing interactions. Vol. 17. MIT Press, Cambridge, MA.

[33] James H. Moor. 1985. What is computer ethics? Metaphilosophy 16, 4 (1985), 266–275.

[34] Simone Mora, Francesco Gianni, and Monica Divitini. 2017. Tiles: A card-based ideation toolkit for the Internet of Things. DIS 2017 - Proceedings of the 2017 ACM Conference on Designing Interactive Systems (2017), 587–598. DOI: http://dx.doi.org/10.1145/3064663.3064699

[35] Sofia Papavlasopoulou, Michail N. Giannakos, and Letizia Jaccheri. 2017. Empirical studies on the Maker Movement, a promising approach to learning: A literature review. Entertainment Computing 18 (1 2017), 57–78. DOI: http://dx.doi.org/10.1016/j.entcom.2016.09.002

[36] Seymour Papert. 1980. Mindstorms: Children, Computers, and Powerful Ideas. Basic Books. 230 pages. https://dl.acm.org/citation.cfm?id=1095592

[37] Frank Pasquale. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press. http://www.jstor.org/stable/j.ctt13x0hch

[38] Robin Roy and James P. Warren. 2019. Card-based design tools: A review and analysis of 155 card decks for designers and designing. Design Studies 63 (2019), 125–154. DOI: http://dx.doi.org/10.1016/j.destud.2019.04.002

[39] Ben Shneiderman, Catherine Plaisant, Maxine Cohen, Steven Jacobs, Niklas Elmqvist, and Nicholas Diakopoulos. 2016. Grand challenges for HCI researchers. Interactions 23, 5 (2016), 24–25. DOI: http://dx.doi.org/10.1145/2977645

[40] Shourya Pratap Singh, Ankit Kumar Panda, Susobhit Panigrahi, Ajaya Kumar Dash, and Debi Prosad Dogra. 2018. PlutoAR: An Inexpensive, Interactive and Portable Augmented Reality Based Interpreter for K-10 Curriculum. arXiv:1809.00375v2 [cs.HC] (2018).

[41] Rachel Charlotte Smith, Ole Sejer Iversen, and Mikkel Hjorth. 2015. Design thinking for digital fabrication in education. International Journal of Child-Computer Interaction 5 (9 2015), 20–28. DOI: http://dx.doi.org/10.1016/j.ijcci.2015.10.002

[42] Pieter Jan Stappers and Elisa Giaccardi. 2017. Research through Design. In The Encyclopedia of Human-Computer Interaction, 2nd edition, M. Soegaard and R. Friis-Dam (Eds.).

[43] David Touretzky, Christina Gardner-McCune, Fred Martin, and Deborah Seehorn. 2019. Envisioning AI for K-12: What should every child know about AI?. In AAAI.

[44] Ari Tuhkala, Marie-Louise Wagner, Nick Nielsen, Ole Sejer Iversen, and Tommi Kärkkäinen. 2018. Technology Comprehension. In FabLearn Europe ’18. 72–80. DOI: http://dx.doi.org/10.1145/3213818.3213828

[45] Kirsikka Vaajakallio and Tuuli Mattelmäki. 2014. Design games in codesign: as a tool, a mindset and a structure. CoDesign 10, 1 (2014), 63–77. DOI: http://dx.doi.org/10.1080/15710882.2014.881886

[46] Jeannette M. Wing. 2006. Computational thinking. Commun. ACM 49, 3 (2006). DOI: http://dx.doi.org/10.1145/1118178.1118215

[47] Karen Yeung. 2017. ‘Hypernudge’: Big Data as a mode of regulation by design. Information Communication and Society 20, 1 (2017), 118–136. DOI: http://dx.doi.org/10.1080/1369118X.2016.1186713