
Designing a Future Worth Wanting: Applying Virtue Ethics to HCI

Tim [email protected]

Drexel University
Philadelphia, Pennsylvania, USA

ABSTRACT
Out of the three major approaches to ethics, virtue ethics is uniquely well suited as a moral guide in the digital age, given the pace of sociotechnical change and the complexity of society. Virtue ethics focuses on the traits, situations and actions of moral agents, rather than on rules (as in deontology) or outcomes (consequentialism). Even as interest in ethics has grown within HCI, there has been little engagement with virtue ethics. To address this lacuna and demonstrate further opportunities for ethical design, this paper provides an overview of virtue ethics for application in HCI. It reviews existing HCI work engaging with virtue ethics, provides a primer on virtue ethics to correct widespread misapprehensions within HCI, and presents a deductive literature review illustrating how existing lines of HCI research resonate with the practices of virtue cultivation, paving the way for further work in virtue-oriented design.

CCS CONCEPTS
• Human-centered computing → HCI theory, concepts and models; Interaction design theory, concepts and paradigms.

KEYWORDS
virtue ethics, design ethics, information ethics, value sensitive design

ACM Reference Format:
Tim Gorichanaz. 2022. Designing a Future Worth Wanting: Applying Virtue Ethics to HCI. In Proceedings of ACM Conference (Conference’17). ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

They constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect that no one will need to be good.

—T. S. Eliot, “Choruses from The Rock,” 1934

1 INTRODUCTION
The story of technology is one of continually growing power, reach, speed and scale. Networked computing technologies enable us to affect each other more than ever before—and to do so more quickly and significantly. This trajectory has inspired countless developments, from real-time video calling worldwide to rapid vaccine development and distribution. But it has also brought along difficult sociopolitical consequences and moral quandaries, from the spread of misinformation [111] to the difficult economics of gig work [2] and on and on; and these issues present challenges for human wellbeing and flourishing in the digital age.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
Conference’17, July 2017, Washington, DC, USA
© 2022 Association for Computing Machinery.
ACM ISBN 978-x-xxxx-xxxx-x/YY/MM. . . $15.00
https://doi.org/10.1145/nnnnnnn.nnnnnnn

Amidst this trajectory, the moral side of technology is becoming impossible to ignore. While technology design is often oriented toward commercial values such as productivity and efficiency [60, 96, 150], scholars in human–computer interaction (HCI) and allied fields have for decades been researching and developing methods for orienting technology design toward essential human values such as justice, reflection and meaning [e.g., 25, 37, 46, 90]. Of late, such discussions have entered the public discourse as well [50, 103, 127].

HCI researchers and designers, as well as users and other stakeholders, seem to be searching for ethical guidance for the digital age. In all fields of technology, discussions of ethics are on the rise. Commonplace are (often implicit) burning questions such as what makes life worth living and how we can make the world a better place through design. Ethics-based methods for design are proliferating [31], and in universities there is a growing interest in teaching ethical approaches to sociotechnical design and evaluation [41]. The past few years have seen the inception of agencies dealing with such issues, including the Center for Humane Technology (founded in 2018) and the Sacred Design Lab (2020), as well as countless academic research centers, including the Technology Ethics Center at the University of Notre Dame (2019) and the Ethics, Technology, and Human Interaction Center at Georgia Tech (2020)—to say nothing of the myriad such initiatives that existed prior.

To facilitate these efforts, continued translational work is needed to incorporate developments from moral philosophy into computing and vice versa [43, 139]. In HCI, the majority of work in this vein has focused on two of the three major ethical traditions: deontology, which focuses on rules of behavior; and consequentialism, which focuses on outcomes of behavior. There has been a noticeable lack of meaningful engagement with the third major ethical tradition, virtue ethics. This approach to ethics focuses on the traits, situations, motivations and actions of moral agents rather than on the outcomes of actions or on rules governing action [115]. Virtue ethics explains how moral qualities are identified, how character is developed, and how moral dilemmas are negotiated in real life. Over the past decade, a number of scholars have made the case that virtue-based approaches to design are uniquely well suited to today’s sociotechnical environment [e.g., 63, 145], but HCI has not yet integrated these arguments.

arXiv:2204.02237v1 [cs.CY] 5 Apr 2022

Conference’17, July 2017, Washington, DC, USA Tim Gorichanaz

This paper provides an overview of virtue ethics for application in interaction design. While typical work in HCI, particularly that presented at conferences, has focused on practical application rather than reflective and theoretical discussions, this paper seeks to provide a bridge between the theoretical and the applied as part of an ongoing move toward deeper engagement with theoretical issues in HCI. This paper begins by briefly defining virtue ethics, reviewing existing work in HCI that engages with virtue ethics, and outlining an argument for adopting virtue ethics over other ethical theories in design. Then, with an eye toward demonstrating the applicability of virtue ethics to HCI, the paper discusses the major tenets of virtue ethics, focusing particularly on the practices for cultivating virtue. The paper illustrates how existing work in HCI resonates with those practices, leading to opportunities for direct engagement with virtue ethics based on existing lines of research in HCI.

2 AN OVERVIEW OF VIRTUE ETHICS
To put it telegraphically, ethics is the study of the good life. Broadly, there are three main approaches to ethics—deontology, consequentialism, and virtue ethics—each of which separates good from bad actions by focusing on a different aspect of action: deontology focuses on rules and duties that guide action, consequentialism focuses on outcomes of actions, and virtue ethics focuses on certain qualities of the person doing the action [115].

Virtue ethics posits that people can achieve the good life through cultivating specific moral qualities, called virtues, within themselves [67]. What qualities are considered virtues is socially local, rather than universal; still, some virtues are upheld across cultures and epochs, such as courage, justice and honesty [84]. Besides helping identify the virtues, virtue ethics contends that we can become more virtuous. That is, unlike certain human qualities such as height, the virtues are not inborn or fixed; rather, we must practice and grow them. Moreover, the virtues do not exist in isolation. Virtue ethicists speak of a person’s character, or their virtue writ large, beyond simply observing that a person exhibits this or that particular virtue [64]. Thus, according to most theorists, the virtues are united or integrated through a person’s practical wisdom, i.e., the ability to act discerningly given the particulars of each situation [15]. For example, a would-be courageous person who is lacking in practical wisdom may, in the end, simply be foolhardy [9]. Additionally, virtue is not just about action, but also perception, motivation and justification; being virtuous is about doing the right thing for the right reasons [157].

As mentioned, crucially, the virtues are not fixed traits we are born with; rather, they are practiced and honed throughout our life. How is this done? A recent and highly regarded account of this is given by Shannon Vallor in Technology and the Virtues [145]. In this book, Vallor synthesizes a tremendous literature spanning the philosophy of technology and several other philosophical traditions, as well as case studies of several emerging technologies. In particular, she analyzes three distinct traditions of virtue ethics—Aristotelianism from Greece, Confucianism from China, and Buddhism from India—with the aim of developing the foundation of a global morality for the 21st century and beyond. As part of this work, she provides a framework of seven practices by which the virtues are cultivated:

Moral Habituation Learning by doing, not just theorizing, and improving gradually over time

Relational Understanding Recognizing our roles and relationships with others

Reflective Self-Examination Checking in on ourselves with respect to moral aspirations

Self-Direction Choosing our goals and taking steps toward those goals

Moral Attention Becoming aware of and tending to morally salient facts in the environment

Prudential Judgment Choosing well among available options

Extension of Moral Concern Doing good for others as well as ourselves

Just like the virtues themselves, these practices are mutually reinforcing. Moreover, they are not just steps to unlock virtues, but rather they are “constitutive and enduring elements of virtue itself” [145, p. 66]. Each practice will be discussed in further detail below, in connection with relevant existing work in design. But first, the following section presents an argument for adopting virtue ethics as a guiding framework in interaction design.

3 WHY VIRTUE ETHICS IN HCI
Virtue ethics is an approach to ethics with roots in ancient traditions from several cultures dating back millennia [145]. Over the last few centuries, it gradually fell out of favor, giving way to deontology (which resonated with the Judeo-Christian religions’ focus on laws) and later consequentialism (growing out of the Enlightenment ideals of calculated rationality and universality) [84, 115]. But over the past half-century, there has been a renewed interest in and development of virtue ethics [84, 115, 145]. This resurgence has yet to meaningfully penetrate HCI, wherein deontology and consequentialism are still the most broadly understood and applied ethical theories [31, 164]. Yet there are good reasons for HCI to engage with virtue ethics, which are discussed in this section.

Let us begin with a reflection on our sociotechnical situation. With today’s technologies, humanity has unprecedented power, including the ability to annihilate our own existence [82]. Our efforts have affected our lived environment, as manifest in the changing climate. Our institutions are capable of producing new sociotechnical platforms that drastically change our way of life in a matter of years (e.g., smartphones, social media). Even individuals have further reach than ever before. These new technologies, which proliferate through the potency of network effects, introduce possibilities and dangers whose consequences are impossible to predict (genetic modification with CRISPR/Cas9, proposals for geo-engineering to curb climate change, social media policies intersecting with global politics, etc.). Moreover, our environment is also becoming less predictable (fires, droughts, extreme storms). Given all this, the future is less predictable than ever before [3, 145].

How does this bear on morality? Recall that ethics is the search for the good life. Life, of course, depends on the material qualities of our lived environment as well as our technologies. Most ethical traditions rely on the moral landscape of the future being more or less as it is in the present, with technological change coming slowly enough for our moral faculties to evolve alongside it [145]. Mill, for example, in his introduction to utilitarianism, wrote that the theory would work “as long as foresight is a human quality” [91, p. 25].

In this light, we can appreciate reasons deontology and consequentialism fall short as moral guides today—many of which have already been noted by HCI scholars [46, 82, 92, 164]. Deontology seeks to provide rules for moral action. It is, of course, impossible to list an exhaustive set of rules, and situations arise where rules conflict. In any case, with such an opaque future, we cannot know ahead of time what the moral rules should be [145]. Even systems with one or a few rules, such as Kant’s categorical imperative (i.e., an action is good if we would want everyone to do it), fail us, as they require us to know relevant facts about life in the future while assessing each rule. For example, consider asking yourself in 2005 whether it would be good for everyone to be on Facebook. Consequentialism, on the other hand, does not rely on specified rules, but it does require that we be able to know and predict the consequences of our actions. At its worst, this lends itself to the perfectionism of endlessly seeking to optimize outcomes [145]. But moreover, given the vicissitudes of emerging technologies, we certainly cannot predict the long-term consequences of our actions; and even in the short term, actions may have far-flung knock-on effects due to the affordances of networked digital technologies [128].

In our sociotechnical environment, we cannot rely on fixed rules of conduct (we need to know when to bend the rules or even rewrite them) or on predicting outcomes. Virtue ethics, on the other hand, provides a balanced, dynamic and responsive framework for discerning and moving toward the good life using tools and concepts that have been with humanity for millennia and which are still relevant, even given the tremendous change of the last few centuries [63, 145]. Vallor writes:

Virtue ethics is a uniquely attractive candidate for framing many of the broader normative implications of emerging technologies. . . Virtue ethics is ideally suited for adaptation to the open-ended and varied encounters with particular technologies that will shape the human condition in this and coming centuries. Virtue ethical traditions privilege the spontaneity and flexibility of practical wisdom over rigid adherence to fixed rules of conduct—a great advantage for those confronting complex, novel, and constantly evolving moral challenges such as those generated by the disruptive effects of new technologies. [145, p. 33]

4 PRIOR WORK ON VIRTUE ETHICS IN HCI
Though deontology and consequentialism are the most frequently applied ethical theories within HCI, there are some precedents for applying virtue ethics. While this paper is not meant to provide a systematic, comprehensive literature review of the subject, an illustrative presentation of this work is in order. To explore the existing work in HCI that deals with virtue ethics, a literature review was conducted in July 2021. This review was targeted toward works that overtly mentioned or employed virtue ethics. This began with a full-text search in the ACM Digital Library for “virtue ethics” in SIGCHI-sponsored venues, returning 17 results. To supplement this, searches were conducted for texts containing “virtue ethics” and “hci” in Google Scholar, IEEE Xplore and LISTA (Library and Information Science and Technology Abstracts). For relevant articles, citations were traced forward and backward using Google Scholar. While no date limitations were used in any of these searches, it is striking that none of the resulting articles was from earlier than 2001, and the vast majority were from 2019 onwards.

For the purposes of discussing the body of literature retrieved, a thematic analysis was conducted. Four themes characterize this literature: human–robot interaction; HCI methodology; design education; and the design process. In the field of human–robot interaction, virtue ethics has become a growing subject of discussion over the past few years. Some of this work explores the question of robots having moral worth as beings [33, 133, 164], and other literature explores whether robots could be competent moral actors [71, 114, 152, 154]. Whereas most of this work discusses robots in general, some authors have focused on automated vehicles in particular [47, 142]. Next, some scholars have applied virtue ethics to HCI research methodology, including discussing the ethical dimensions of visualization research [34] and consent in human subjects research [137]. Third, virtue ethics has also entered the conversation regarding design education. Virtue ethics has formed the philosophical grounding for classroom activities in design fiction [18], data science [125], and system analysis [38]. In a broader contribution, Pierrakos et al. argue that ethics education for engineers should move from an emphasis on rule-following to one of moral character, i.e., virtue [112].

Finally, the bulk of the literature retrieved relates to the conceptual foundations of the design process, which is most relevant to the present paper. Some of this work simply mentions that deontology and consequentialism are the norm in design ethics but acknowledges virtue as a fruitful path to consider [31, 46, 82, 92]. (This is also mentioned among the works in human–robot interaction [47, 154, 164].) Light et al. specifically mention virtue ethics as a useful framework given the dynamic, intersecting crises that humanity is facing at present [82]. Next, a few contributions provide guidance for applying virtue ethics, including frameworks for virtue-based engineering [13, 95] and interaction design [15]. Some authors have enumerated particular virtues to be used in design [74, 135], and one contribution discusses how design fiction can offer a path for reflecting on virtues in design [20]. More granularly, Chivukula et al. [31] provide a listing of eight particular design techniques using virtue ethics as part of their survey of ethical design techniques. However, only two of these are explicitly rooted in virtue ethics; four others resonate with but do not explicitly mention virtue ethics; and two (Borning et al. and Zhou) seem better described as deontological or consequentialist rather than virtue-oriented.

Much of the work reviewed, unfortunately, suggests misunderstandings of virtue ethics. For example, van Berkel et al. write, “The implementation of virtues in AI applications is inherently challenging, e.g. how would one go about quantifying a courageous algorithm through a rule-based approach?” [146]. This seems to confuse virtue ethics with deontology, as virtue-oriented approaches are categorically not rule-based; moreover, virtue ethicists would not suggest that we attempt to create a “courageous algorithm,” but rather that we help people practice courage. Besides such fundamental errors, others have characterized virtue ethics as focused only on the solipsistic development of one’s character traits, overlooking the sociality, context-sensitivity and dynamism of virtue ethics. For example, Chivukula et al. write that “traditional systems of professional ethics . . . often fixate on consequentialist, deontological, or virtue ethics [rather than] a pragmatist ethics that values both the designer and her character and the unique complexity of the design situation” [30, p. 9]. But virtue ethics, properly understood, does value the complexity of the situation as well as the actor’s character. More recently, Chivukula et al. have defined virtue ethics in design narrowly as focusing on the designer’s character alone. Again, this overlooks the way virtue is embedded in moral communities, not just individualistic actors. Such a perspective then misses the possibility that designers could create products that help users cultivate virtues, not just that designers can cultivate their own virtues.

Figure 1: How the seven practices of virtue cultivation [145] (inner text) connect with existing literature in HCI (outer text)

All in all, the review of prior work suggests that the field of HCI is in need of a more accurate and complete understanding of virtue ethics, including how it is eminently compatible with emerging ethical theories under discussion such as pragmatic ethics and care ethics, as well as more guidance for applying virtue ethics to existing frameworks. The previous sections may have already gone some way toward helping the field achieve the former; the next section concerns the latter.

5 CONNECTING THE PRACTICES OF VIRTUE CULTIVATION TO EXISTING RESEARCH IN HCI

One aim of this paper is to demonstrate how virtue ethics can be applied to HCI research and design. To that end, this section identifies connections that already exist implicitly between HCI and virtue ethics by using the seven practices of virtue cultivation outlined above as an analytical framework. Through this framework, it is possible to discern and classify existing work in HCI that may readily contribute to virtue-oriented design. This is summarized in Figure 1.

This section serves to help designers and researchers in many areas of HCI to better appreciate how their work already may contribute to virtue cultivation. In many cases, designing to support morality requires only a slight shift of focus in existing research programs. Moreover, others interested in moral design can recognize and continue to organize the wealth of relevant existing work in HCI and build upon this strong but as yet disparate foundation.

The topics discussed in this section arose through years of research and teaching a range of topics in HCI and design as well as open-ended literature searching. This section does not purport to be exhaustive or systematic. Rather, it offers examples and illustrations of how virtue ethics may be applied in a range of topics across HCI and design. The heuristic value of this section lies in drawing connections and pointing to opportunities in research and design.

The following subsections are dedicated to the seven practices of virtue cultivation described by Vallor [145]. In each section, the practice is first defined in the context of virtue ethics. Next, connections to existing HCI literature are described for illustration. Finally, opportunities are pointed out for further research to take the existing work in a more virtue-oriented direction.

5.1 Moral Habituation
According to virtue ethicists, we become virtuous primarily by acting morally, not by thinking or reading about morality [145]. Moral habituation implies gradual improvement over time as we strive to do the right thing in each situation that confronts us. It also involves justification; true moral habituation is not just rote doing, but also understanding the reasons for that doing. With moral habituation, we have a reason for acting, the action is valued as good, and we repeat that action, which shapes our ongoing emotional states. This progress leads us to experience joy in performing moral action, which in turn further strengthens our commitment to doing good [145].

Moral habituation relates to several threads of HCI literature. The notion of learning through action recalls the maxim that professionals (not least designers) think through doing, which Donald Schön described as reflection-in-action [122]. Users, too, learn through doing, which is encapsulated in the concept of learnability, a facet of usability [98]. As well, there is a rich literature in HCI on users’ repeated use of systems in various contexts, framed with concepts such as habit [32, 77, 78, 113], routine [19, 107, 138, 143], and ritual [10, 88, 104, 110].

However, this prior work has conceptualized habituation amorally—that is, it does not entail an orientation toward right action. Still, the work on habit, routine and ritual in HCI could easily be a site for applied virtue ethics if the focus turns toward the morally good and not just the desirable or profitable.
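To make this shift concrete, one could imagine a habit tracker that refuses rote logging and instead pairs each repeated action with a stated reason, echoing the idea that moral habituation couples doing with justification. The following Python sketch is purely illustrative; the class and method names (`MoralHabitLog`, `record`, `streak`) are hypothetical, not drawn from any system cited above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HabitEntry:
    action: str   # what was done
    reason: str   # the stated justification for doing it
    day: date     # when it was done

@dataclass
class MoralHabitLog:
    """A habit log that treats justification as part of the habit,
    not an optional annotation."""
    entries: list = field(default_factory=list)

    def record(self, action: str, reason: str, day: date) -> None:
        # Rote doing is rejected: every entry must carry a reason.
        if not reason.strip():
            raise ValueError("moral habituation pairs action with a reason")
        self.entries.append(HabitEntry(action, reason, day))

    def streak(self, action: str) -> int:
        """How many logged repetitions of this action exist so far."""
        return sum(1 for e in self.entries if e.action == action)
```

On this design, repetition and justification are stored together, so a later reflection prompt could surface both the pattern of action and the reasons given for it.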

5.2 Relational Understanding
Virtue ethics conceptualizes the human person as a fundamentally social being—one formed through a network of relationships. Vallor writes, “In contrast, both Kantian and utilitarian ethics portray the self as an independent rational agent who can and ought to act autonomously, without relying on the external guidance of others” [145, p. 76]. The practice of relational understanding is, first, the continual pursuit of a more precise and accurate view of our social relationships; and second, building the skill of using this understanding in prudential judgments in response to the moral exigencies of each particular relationship in each particular circumstance [145].

In HCI, there has been a sizable literature examining human social roles in various sociotechnical contexts. Previous work has examined social roles in family life [5, 12, 73], the workplace [14] and online support communities [159], and researchers have investigated gender roles regarding emerging technologies [106, 134, 160]. Research in this area has been marshaled toward the automated recognition of social roles, for example in the workplace [7, 29, 76, 155], in teaching [65], and across contexts [6, 7, 105]. Additionally, work has been applied to the development of automated systems such as chatbots [136, 148] and social robots [51, 65, 66].

Just as with prior work related to moral habituation, the research surveyed in this section has been done without any particular moral intent. The existing work could be deepened by foregrounding the particularly moral dimensions of roles in sociotechnical contexts, and future work can be channeled toward virtue-based design. For example, technologies that can discover social roles in context could show these findings to the user, offering them an opportunity for self-reflection and allowing them to consider the moral relevance of these roles (e.g., what duties and opportunities a given relationship might suggest) and to adopt new roles as prudent.
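As a minimal sketch of this suggestion, detected roles might be turned into open-ended reflection prompts rather than prescriptions, leaving the moral judgment with the user. The function below is a hypothetical illustration (the name `reflection_prompts` and the prompt wording are my own, not taken from the works cited); role detection itself is assumed to happen elsewhere.

```python
def reflection_prompts(detected_roles):
    """Turn automatically detected social roles into open-ended
    prompts for self-reflection, one per role."""
    template = ("In this context you often act as a {role}. "
                "What duties and opportunities might this role suggest? "
                "Is there another role you would rather grow into?")
    return [template.format(role=role) for role in detected_roles]
```

For example, `reflection_prompts(["mentor", "moderator"])` yields two prompts, each inviting the user to weigh the moral relevance of that role for themselves.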

5.3 Reflective Self-Examination

Third, virtue ethics contends that moral principles are not one-size-fits-all. Rather, any moral decision must take into account our personal strengths and weaknesses. Assessing these accurately requires a habit of reflective self-examination. The goal of such examination is to discern how well we are conforming to the ideal moral self we aspire to be—taking into account our beliefs, thoughts, feelings and actions. Reflective self-examination engenders within us a sense of responsibility to correct our shortcomings and stirs up a sense of joy in doing so [145].

The practice of reflective self-examination resonates with philosophical discussions of technologies of the self. Foucault, who originated the term, describes for example how self-care was enacted in Ancient Greece and Rome through sociotechnical practices such as keeping personal notebooks and writing letters [44]. In HCI, related work is broadly part of the Slow Technology movement [57, 60]. Within this movement, many have examined how technology can aid people in self-reflection [17, 42, 49, 94, 102]. HCI scholars have shown how technology can support self-reflection in several ways: by providing reasons to reflect [42, 119]; by providing the conditions for reflection [17, 94, 119]; by supporting reflection at different levels of depth [42]; and by guiding people through the stages of reflection [94, 119].

In what is now a familiar refrain, most of this work has proceeded without reference to an external set of values. In this literature, reflection is taken to be a personal matter, for instance a path to greater pleasure or self-improvement. However, some recent work has tied reflection explicitly to values and ethics, such as exploring how reflection on our values may aid in combating online misinformation [8]. Future work could explore how technology can encourage and assist people in reflecting specifically on their moral strengths and weaknesses, their virtuous development, etc.

5.4 Self-Direction

As humans, our actions are generally goal-directed. These goals may be big or small, long-term or short-term, implicit or explicit. However, we seldom consider why we have particular goals or wonder if we should have different ones, especially in the moral realm. But ethics is about intention and justification as much as action; and virtue ethics in particular asks us to generate a vision of our ideal future moral self—as Vallor writes, "the type of being that is worthy of my becoming in this world" [145, p. 97]—to provide direction to our efforts. Without such insight, all the moral intention and action in the world may be fruitlessly misdirected. Indeed, virtuous development is said to be motivated in the first place by our observing moral exemplars in the community—those people we admire for their morality and thus seek to emulate [see also 162]. The practice of self-direction entails courageous movement down the path we have chosen for our life toward realizing our ideal future self. Because this is difficult, Vallor says that we must follow this path not for social gain but out of a true desire to cultivate righteousness for its own sake [145].

Work related to goals has a long history in HCI. An early and still applicable theoretical model in HCI is GOMS (goals, operators, methods and selection rules), which posits that people select and carry out particular methods meant to achieve their goals, and which provides techniques for evaluating and optimizing systems with respect to users' goals [28]. A sizable literature engages with goal-setting theory from psychology to examine how technology can support goal achievement; this work has been applied to areas such as education [16, 85, 109], gamification [144], fitness tracking [75, 97, 99], energy consumption [123], and overcoming challenges related to disability [48, 87]. Another line of HCI research has sought to develop automated methods for discerning and modeling users' goals [4, 132, 163] to create adaptive user interfaces that help people achieve their goals [40, 79, 156]. In this vein, recent work has explored how AI systems can suggest goals to users and allow users to reflect on those goals [93].

HCI research on goals has contributed to our understanding of how people's goals form and how goal setting and achievement can be assisted with technology. From the perspective of virtue ethics, however, this work has some limitations. The literature on GOMS, for instance, has taken people's goals at face value, seeking to help people achieve their goals rather than helping them choose from among a set of goals. GOMS is also tuned toward quantitative evaluation of usability in specialized settings, rather than broader questions of user experience and morality "in the wild." The work related to goal-setting theory and goal modeling does attend to selecting and defining goals, but this has been done with respect to goal specificity and difficulty, not any moral dimensions. Further research applying virtue ethics to goal-setting and goal-oriented user interfaces may be able to use similar methods to assess how to support people in identifying and achieving goals related to the cultivation of virtues; moreover, it may turn out that there are additional dimensions of goals relevant to morality that should be considered, beyond simply difficulty and specificity. As well, future research in this area may examine the identification of moral exemplars among designers and users, as these are at the root of self-direction.

5.5 Moral Attention

Next, becoming virtuous requires that we understand when moral action is called for. To do so, we must sensitize ourselves to the features around us that are morally relevant, which Vallor calls moral attention. At our best, we will be able to say not only that some feature is morally salient, but also why it is and what to do about it. Vallor offers the simple example of a hungry coyote stalking toward a defenseless toddler. Observing this, any of us would understand the danger for the child and be stirred into action [145]. But most situations are not so clear-cut. If we encounter a person on the street studying a map, we read their body language and facial expressions to determine if they need help, or if they're just planning. If we see a bedraggled person watching us from a city bench, do we avert our eyes, cross the street, offer them some spare cash, bring them a new set of clothes, or invite them into our home for dinner and a wash? There are any number of subtle features of the situation that help us make this decision. Indeed we may make a decision that upon reflection we realize was the wrong one—hence the need for reflective self-examination. More difficult still are those encounters and situations that are protracted and dynamic: how best to act during an emerging novel disease pandemic, for example? Cultivating moral attention gradually helps us discern what each particular moral situation calls us to do; sometimes this involves adapting or flouting social conventions or normative rules that otherwise would apply. In this way, we can respond flexibly to changing situations.

In HCI, there is a long tradition of research on human perception, particularly for the design of effective displays [153], and on attention, especially in complex sociotechnical environments amidst the threat of interruption [11, 117]. This work tends to be pedestrian, oriented toward making sure technologies are well suited to the human organism. For example, a large literature is dedicated to human attention and distraction while driving, aiming to increase road safety [e.g., 121]. But the practice of moral attention calls for a deeper sense of perception and attention, one that integrates human experience amidst a dynamic environment with moral interpretation. There has been some work directly in this vein, looking for example at students' and engineers' ethical awareness in decision making [23, 52]. There is also a vein of work in HCI on cultivating mindfulness, which is generally defined as a form of attentional training [1, 36, 120, 131, 141]. There has also been work in HCI on the moral features of our digital environments, equipping us with concepts such as dark patterns [55, 86], ghost work [56], etc.; having such concepts is a vital step in training our moral attention.

While much of the work in HCI on attention and perception is not focused on morality, there is an emerging literature that has the capacity to connect to moral attention. Existing work on human attention, distraction, interruption, and so on can be channeled to create technologies that do not completely consume people's attention, allowing them autonomy and flexibility in what they attend to, and thereby the capacity to cultivate their moral attention. Future work may continue to identify ethical concepts and moral dimensions of existing user interfaces and sociotechnical infrastructure, foster public discussion around them, and equip people with moral actions they can take in response.

5.6 Prudential Judgment

Central to ethics is the question of making decisions and taking action, and likewise Vallor argues that this list of practices would be incomplete without prudential judgment. This is "the ability to choose well, in particular situations, among the most appropriate and effective means available for achieving a noble or good end" [145, p. 105], and having such an ability is necessary for acting in unanticipated or rapidly evolving situations. Prudential judgment involves interpreting our circumstances and emotions correctly, understanding our options and their possible consequences, and choosing the best among them.

One way in which prudential judgment could be applied to HCI is in the design of interactive systems to help users make better moral judgments. Indeed, since their inception, computers have been used to support human decision-making, whether by providing data, calculations or models. An early theoretical vision for this was J. C. R. Licklider's 1960 description of "man–computer symbiosis" that would enable people to make better-informed decisions more quickly and easily [81]. As such, for the entire history of HCI, researchers have contributed to the development of decision support systems, most notably in business management [21, 89]. Today, the development of decision support systems still represents a major line of work in HCI, particularly in healthcare [118, 161]. More recently, this sort of work has been conceptualized as choice architecture [68], which is a matter of "organizing the context in which people make decisions" [140, p. 3], often including subtly nudging users toward desirable choices. HCI researchers in this area have sought to operationalize the psychology of human decision-making by creating technologies to help people make judgments [68]. This research area is still nascent, but ongoing research provides methods to support designers in choice architecture [26, 27, 68] and explores how these ideas could be integrated into AI recommender systems [69].

Some may criticize nudging as inherently immoral on the grounds that it works through underhanded manipulation. However, there are other ways to accomplish nudging, and the majority of nudges in the HCI literature stimulate conscious reflection (rather than unconscious response) through transparent (rather than black-box) mechanisms [26]. Such systems may be more morally defensible, particularly when they nudge us to carry out actions that we have already committed to wanting to carry out, such as commuting by bike more often than by car. While decisions around self-improvement and productivity may be morally relevant, most work on nudging in HCI is not explicitly oriented toward moral outcomes. Future work in HCI could move in that direction. Indeed, there is a growing literature outside of HCI on the ethics of nudging [e.g., 22, 83, 124], and future work in HCI could incorporate this discourse into the design and development of novel, virtue-oriented systems. Additionally, morally relevant nudging has also started to be incorporated in public platforms outside academia—for instance, Instagram experimenting with hiding like counts [108] and Twitter encouraging users to read articles before retweeting links [149]—and HCI researchers can contribute by measuring the effectiveness of such efforts and discovering new opportunities in this direction.

Another possible application of prudential judgment in HCI bears mentioning: that of helping designers make better ethical decisions throughout the design process. Related to this, some work in HCI has investigated how design practitioners reason through ethical dilemmas in their work. This includes some of the work mentioned in the previous section on ethical awareness [23, 52] as well as other studies engaging with moral judgment [53, 54]. Further work in this vein could provide a foundation for educating designers on moral judgment as part of helping designers cultivate virtue.

5.7 Extension of Moral Concern

The final practice of cultivating virtue is extending a caring attitude beyond its initial scope. If the initial scope of our morality is to care for ourselves and our family, this practice asks us to care also for our neighbors, our fellow citizens, our fellow humans, our fellow creatures, and perhaps beyond [see also 43, 130]. This must be done appropriately, rather than indiscriminately—to the right degree, at the right time, to the right entities and in the right way [145]. Vallor contends that this is the most powerful of these seven practices, and also the most challenging.

An interesting parallel to the extension of moral concern is found in the extension of the disciplinary concern of HCI over the course of its history. In a famous 1990 paper, Grudin described the historical trajectory of user interface design as "reaching out" from the computer hardware to further and vaster realms of the human social world [58]. Similarly, early visions of ubiquitous computing [151] presaged our modern day, when digital technologies are woven into the fabric of everyday life. These changes resonate with the paradigm shifts in HCI research and theory more broadly toward dynamic social contexts [62]. As well, HCI has expanded from its historical interest in making systems more usable to appealing to deeper human values, such as justice, equality, privacy, meaning and more [59, 90, 147, 158]. Perhaps inevitably, then, there is some precedent in HCI for the extension of specifically moral concern, such as in considering the needs and welfare of not only end users but a broader set of direct and indirect stakeholders, which is a core tenet of value sensitive design (VSD) [45].

Though VSD and similar design approaches are gaining traction, there is still room for the extension of moral concern to proliferate among designers and developers—particularly as society continues to grapple with the unexpected consequences of new technologies. To this end, Vallor poses a number of questions:

Are certain kinds of technology expanding or narrowing the scope of our moral concern, or do they exert no influence on this aspect of moral practice? Might some emerging technologies change how we express our moral concern for others? Might they allow some forms of care, compassion, and civility to be expressed more easily than others? Could some emerging technologies exacerbate moral tribalism, neglect, or incivility, shrinking the circle of our moral concern for others? Might other technologies have the opposite effect, encouraging the exercise of moral imagination and perspective-taking that enrich our capacities for moral extension? How can we drive more resources into development of the latter, rather than the former? What is the potential risk to human flourishing if we can't, or won't? [145, p. 117]

6 CRITICISMS OF VIRTUE ETHICS

When considering any position or theory, it is indispensable to engage with arguments both for and against. As a millennia-old tradition, virtue ethics has seen a number of critiques. This section discusses three central critiques relevant to HCI—situationism, scalability and individualism—along with responses from virtue ethicists.

6.1 The Situationist Critique

Virtue ethics posits that we have virtues and vices that manifest in character, and that these qualities are dynamically stable. For example, an honest person will tend to tell the truth. Starting in the 1960s, some psychologists began to cast doubt on whether people indeed have such qualities, suggesting instead that we respond much more locally to each particular situation—hence this became known as the "situationist" critique. For example, situationists would say there's no such thing as an honest person; a person simply lies or tells the truth in response to the pressures and opportunities of each particular situation. Situationists argue that the empirical data supports situationism and refutes both virtue ethics and personality psychology [e.g., 61].

However, the empirical claims of situationists have themselves been called into question. First, these claims rely on overgeneralizations that are not supported by the findings that situationists cite [157, ch. 4]. Moreover, the psychological studies that formed the basis of the situationist critique, such as those by Milgram and Zimbardo, have recently been revealed as dubious, if not fraudulent [24, 80, 129]. Next, more recent research demonstrates that even though there is situational variation in people's actions, there are stable qualities that emerge over longer periods of time, just as virtue ethics would predict [145, 157]. Indeed, in a recent review, Wright et al. emphasize that even perfectly virtuous people need not be infallible, as humans are by definition limited (we run out of time, suffer migraines, experience major challenges, etc.) [157, p. 17]. Considering all this, Wright et al. offer a model of virtue that accounts for both the dynamism and stability of human behavior [157].

6.2 The Scalability Critique

Next, critics have observed that virtue ethics emerged in antiquity, when the world was much different than it is today—societies were smaller, simpler, more closed, and more homogenous—and have suggested on these grounds that virtue ethics may no longer be actionable or useful. These critics, such as Luciano Floridi, suggest that virtue ethics doesn't scale [43]. For instance, virtue ethics claims that morality can spread from particular individuals to families, communities and societies through the bonds of learning and caring. Regarding this claim, critics may be concerned that though learning about the virtues may help any given individual, its mechanisms are not sufficient to spread virtue among communities and societies in today's world, given the unpredictable emergent properties of complex and open societies [43].

This critique seems to overlook, first and foremost, the reality that human culture does spread, even in today's complex and open world (perhaps especially so). Consider creative works (Harry Potter, Pokémon, artistic styles) and social movements (animal rights, free software, Black Lives Matter). If other cultural components can scale and spread, why should morality be uniquely unscalable? Indeed, Vallor makes an extended case that the openness and complexity of today's world are precisely why virtue ethics is the best choice for guiding technology design compared to other ethical theories [145].

As an outgrowth of the scalability critique, applied specifically to digital technology, we might observe that technology design and development today increasingly takes place at a few large companies within a corporate shareholder model, prioritizing short-term financial returns over the long-term social good. Observing this, adherents to the scalability critique of virtue ethics would suggest that cultivating virtue even among communities of consumers is not enough to ensure that emerging technologies will be conducive to virtue—that large-scale systemic change is necessary. Again, this critique may overlook the power of consumer choice in directing corporations' product development as we "vote with our wallets" and advocate publicly. The proliferation of organic food and electric automobiles may serve as examples where this has been successful: purchasing power among a powerful few consumers was followed by government incentive and regulation as well as further corporate innovation in an upward spiral. Moreover, business leaders such as Jacqueline Novogratz provide a model for corporations that can make positive moral changes in the world while also being competitive and profitable in the market [101]. The rise of "B Corporations," which are certified for social and environmental performance, is also relevant along these lines [72].

Related to the scalability critique is a concern about relativism. Given that virtue ethics has been developed in various civilizations and epochs, some touting different virtues as central, it may be impossible (or at least unjust) to expect all the world to subscribe to one strand of virtue ethics [43, p. 167]. In response to this critique, Charles Ess has offered the analogy to technical standards. A global internet was only possible with shared protocols and other standards, even though local variation remains; in the same way, Ess argues, a globally interconnected humanity needs shared moral norms, local variation notwithstanding [39]. Still, some may wonder if this is possible. On this point, Vallor writes:

Why should we think that [the global cultivation of shared virtues] is practically possible? Consider that the alternatives are to surrender any hope for continued human flourishing, to place all our hope in an extended string of dumb cosmic luck, or to pray for a divine salvation that we can do nothing to earn. . . . Moreover, if Aristotle had even some success in fostering the cultivation of civic virtues, if Buddhism has encouraged any more compassion and tolerance, or Confucianism more filial care and loyalty, what is to prevent a new tradition from emerging around the technomoral virtues needed for human flourishing today? [145, pp. 56–57]

6.3 The Individualist Critique

Finally, because virtue ethics begins with the cultivation and activity of individuals, it may not seem appropriate as a global ethic when the most pressing moral issues are more collective than individual. Floridi suggests that the "genuine vocation" of virtue ethics is the individual, and that it cannot readily apply to societies (but see the previous subsection); moreover, he suggests it is too Euro-centric to be appropriate for the whole world. Floridi worries that "if misapplied," virtue ethics will foster narrow individualism, whereas what the world needs is broader moral cooperation and collaboration [43, p. 167].

First, Hans Jonas [70] has noted that this critique applies to many ethical theories developed before the 21st century; such theories were meant to address moral questions that individuals face, not collectives, and in circumstances where a person's actions would unfold within a foreseeable, stable present. More deeply, it seems that this critique is aimed at a straw-man version of virtue ethics. It is not the case that virtue ethics is narrowly concerned with a solipsistic individual, nor that it ignores systemic, situational or environmental effects [84]. As Vallor writes, "virtue ethics treats persons not as atomistic individuals confronting narrowly circumscribed choices, but as beings whose actions are always informed by a particular social context of concrete roles, relationships, and responsibilities to others" [145, p. 33]. According to virtue ethics, we cannot cultivate virtue on our own; it takes a whole community, including parents, other family members, teachers, and friends—as well as designers, business and civic leaders, and more. Virtue ethics points to public morality just as much as to private morality, as "a civil person will have a strong interest in supporting and maintaining her fellow citizens' capacities for excellence in public deliberation and action" [145, p. 143]. Critics such as Floridi ignore that virtue ethics flourished not only in ancient Greece; Confucian and Buddhist formulations of virtue ethics were always community-oriented. Even Aristotle, at the root of Western liberalism, described a virtue of civic friendship [9]. As described in response to the scalability critique, virtue ethics is as much about infrastructure as it is about individuals' character qualities.

7 DISCUSSION: TOWARD VIRTUE-ORIENTED RESEARCH AND DESIGN

This paper has provided a guide to virtue ethics for HCI research and design. In particular, it detailed the seven practices of virtue cultivation as synthesized by Vallor [145]. The review of existing HCI work using virtue ethics showed several lacunae in the field's understanding of virtue ethics, and the review of HCI work pertaining to the seven virtue-building practices revealed many opportunities for further integration of virtue ethics in HCI research and design. This section reflects on those opportunities.

As we have seen, there is much existing work in HCI that relates to the seven practices of virtue cultivation, but mostly it has been developed amorally (for example, in terms of habituation rather than specifically moral habituation). Future work in these areas can readily contribute to ethical design by orienting toward the virtues (for example, helping people build habits of honesty or courage).

According to virtue ethics, these practices can be fruitfully developed one at a time, but it may be even more powerful to engage several of them in tandem, as the practices are intertwined and mutually supporting [145]. This suggests that there are unrealized webs of connection between seemingly disparate areas of work in HCI, such as between choice architecture, the social roles of robots, and reflective informatics. The strongest virtue-oriented future work would bring together these research areas toward the pursuit of the good, both in terms of understanding existing systems and designing new ones.

Regarding design, Vallor's own work in Technology and the Virtues demonstrates how virtue ethics can play an indispensable role in the design of digital technologies. The final chapters of the book present several case studies on how emerging technologies such as social media platforms and drones could be further developed toward the good (by attending to the virtues) or not (by ignoring them, or perhaps by attending to vices). As Vallor shows, such case studies may spark critical and creative thinking in designers. For example, she asks how social media platforms might cultivate a shared respect for truth beyond "just handing everybody a louder microphone" [145, p. 179], and she compels us to consider the deep emotional and educational roles of caring and being cared for that may be overlooked by creators of care robots [145, p. 223]. Amidst these case studies, Vallor offers a conceptual methodology for ensuring designed systems are conducive to human virtue.

This points to an opportunity to further integrate existing frameworks of ethical design in HCI with virtue ethics. There already exist a number of frameworks and theoretical approaches that offer a way to consider various stakeholders' moral values in the design process. VSD, mentioned above, is perhaps the most well known among these. Other examples include Helen Nissenbaum's work on values in design [100] and Katie Shilton's work on value conflicts [126]. The argument made in this paper would suggest that virtue ethics can supplement and perhaps further these perspectives; the work of Reijers and Gordijn on "virtuous practice design" provides one example to this end [116]. However, a full analysis of this question must be left for future work.

Additionally, there are empirical opportunities for moving HCI work in a moral direction by engaging with virtue ethics. First, to determine whether and how well interactive systems help users cultivate virtue, researchers could conduct longitudinal studies using psychological measurements of virtue [see 157] before and after a person's engagement with the system or a prototype. Next, regarding the challenge of virtuous technology within the corporate business structure discussed above, future research should explore how designers in industry engage (or not) with virtues and/or values as part of their work, and how these engagements relate to product development. To be sure, there is some existing work along these lines [e.g., 53], if not specifically framed with virtue ethics. Future research may explore how real-world designers respond to and apply the framework of virtue cultivation presented here, and other research may contribute to the development of specific design tools tuned to virtue development [see 31].

8 CONCLUSION

Though the ethics of technology is becoming something of a trendy topic today, questions of morality have been with us since the dawn of computing. As Norman Cousins observed in an essay from 1966: "The question persists and indeed grows whether the computer makes it easier or harder for human beings to know who they really are, to identify their real problems, to respond more fully to beauty, to place adequate value on life, and to make their world safer than it now is" [35]. That essay has been reprinted numerous times, showing its perennial relevance.

As we continue to seek moral guidance for digital technology, this paper suggests that we not overlook virtue ethics. Though virtue ethics has been largely ignored and sometimes misunderstood within HCI, this paper has shown how it is eminently relevant and readily applicable to HCI research and design.

In closing, we might wonder why virtue ethics has to date been ignored within HCI—why so many scholars seem to be unaware of it, or immediately discard it as an option. Perhaps this is because virtue is difficult to operationalize in strictly technical terms, and in our technocratic age we tend to prefer purely technical solutions, as T. S. Eliot's lines in the epigraph of this paper suggest. For better or worse, virtue is tuned toward the human, rather than just the technical. This is a constraint of reality, if one that some would prefer to ignore. It means that, just as Light and Akama observe that technology cannot be designed "for mindfulness" [1], we cannot hope to design technologies that automatically create virtue.

But this is no reason to discard virtue ethics. On the contrary, as Vallor contends, "important as they are, the engineers of the 21st century who fashion code for machines are not as critical to the human mission as those who must fashion, test, and disseminate technomoral code for humans—new habits and practices for living well with emerging technologies" [145, p. 254]. In applying virtue ethics to HCI, then, we must remember that we aren't designing "for virtue" but rather designing to support the human practices of cultivating virtue, from moral habituation to extending moral concern. Design to support human virtue may be exactly what is needed to start off a flywheel of morality.

ACKNOWLEDGMENTS

The title of this paper is an homage to Vallor's book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting [145], which serves as a foundation and inspiration for the present work. I am grateful for the anonymous comments on a draft of this paper, which guided its improvement.

REFERENCES

[1] Yoko Akama and Ann Light. 2015. Towards Mindfulness: Between a Detour and a Portal. In Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI EA '15). ACM, New York, NY, USA, 625–637. https://doi.org/10.1145/2702613.2732498

[2] Ali Alkhatib, Michael S. Bernstein, and Margaret Levi. 2017. Examining Crowd Work and Gig Work Through The Historical Lens of Piecework. Association for Computing Machinery, New York, NY, USA, 4599–4616. https://doi.org/10.1145/3025453.3025974

[3] Brad Allenby and Daniel Sarewitz. 2011. The Techno-Human Condition. MITPress, Cambridge, MA.

[4] Alaa Alslaity and Thomas Tran. 2021. Goal Modeling-Based Evaluation for Personalized Recommendation Systems. In Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (Utrecht, Netherlands) (UMAP ’21). Association for Computing Machinery, New York, NY, USA, 276–283. https://doi.org/10.1145/3450614.3464619

[5] Tawfiq Ammari and Sarita Schoenebeck. 2016. “Thanks for Your Interest in Our Facebook Group, but It’s Only for Dads”: Social Roles of Stay-at-Home Dads. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (San Francisco, California, USA) (CSCW ’16). Association for Computing Machinery, New York, NY, USA, 1363–1375. https://doi.org/10.1145/2818048.2819927

[6] Christoph Anderson, Judith S. Heinisch, Sandra Ohly, Klaus David, and Veljko Pejovic. 2019. The Impact of Private and Work-Related Smartphone Usage on Interruptibility. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers (London, United Kingdom) (UbiComp/ISWC ’19 Adjunct). Association for Computing Machinery, New York, NY, USA, 1058–1063. https://doi.org/10.1145/3341162.3344845

[7] Christoph Anderson, Clara Heissler, Sandra Ohly, and Klaus David. 2016. Assessment of Social Roles for Interruption Management: A New Concept in the Field of Interruptibility. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct (Heidelberg, Germany) (UbiComp ’16). Association for Computing Machinery, New York, NY, USA, 1530–1535. https://doi.org/10.1145/2968219.2968544

[8] Ahmer Arif. 2018. Designing to Support Reflection on Values & Practices to Address Online Disinformation. In Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing (Jersey City, NJ, USA) (CSCW ’18). Association for Computing Machinery, New York, NY, USA, 61–64. https://doi.org/10.1145/3272973.3272974

[9] Aristotle. 2014. Nicomachean Ethics. Cambridge University Press, Cambridge, UK.

[10] Hanif Baharin, Fazillah Mohmad Kamal, Nor Laila Md Nor, and Nurul Amirah Ahmad. 2016. Kin circle: Social messaging system for mediated familial bonding: Designing from the lens of interaction ritual chains theory. 2016 4th International Conference on User Science and Engineering (i-USEr) (Aug. 2016), 85–90.

[11] Saskia Bakker, Doris Hausen, and Ted Selker. 2016. Peripheral Interaction: Challenges and Opportunities for HCI in the Periphery of Attention. Springer.

[12] Madeline Balaam, Judy Robertson, Geraldine Fitzpatrick, Rebecca Say, Gillian Hayes, Melissa Mazmanian, and Belinda Parmar. 2013. Motherhood and HCI. In CHI ’13 Extended Abstracts on Human Factors in Computing Systems (Paris, France) (CHI EA ’13). Association for Computing Machinery, New York, NY, USA, 3215–3218. https://doi.org/10.1145/2468356.2479650

[13] Lee Barford. 2019. Contemporary Virtue Ethics and the Engineers of Autonomous Systems. In 2019 IEEE International Symposium on Technology and Society (ISTAS). 1–7. https://doi.org/10.1109/ISTAS48451.2019.8937855

[14] Jordan B. Barlow. 2013. Emergent Roles in Decision-Making Tasks Using Group Chat. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (San Antonio, Texas, USA) (CSCW ’13). Association for Computing Machinery, New York, NY, USA, 1505–1514. https://doi.org/10.1145/2441776.2441948

[15] Marguerite Barry, Kevin Doherty, Jose Marcano Belisario, Josip Car, Cecily Morrison, and Gavin Doherty. 2017. MHealth for Maternal Mental Health: Everyday Wisdom in Ethical Design. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 2708–2756. https://doi.org/10.1145/3025453.3025918

[16] Prateek Basavaraj and Ivan Garibay. 2018. A Personalized "Course Navigator" Based on Students’ Goal Orientation. In Proceedings of the 2018 ACM Conference on Supporting Groupwork (Sanibel Island, Florida, USA) (GROUP ’18). Association for Computing Machinery, New York, NY, USA, 98–101. https://doi.org/10.1145/3148330.3154508

[17] Eric P. S. Baumer. 2015. Reflective Informatics: Conceptual Dimensions for Designing Technologies of Reflection. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (Seoul, Republic of Korea) (CHI ’15). ACM, New York, NY, USA, 585–594. https://doi.org/10.1145/2702123.2702234

[18] Eric P. S. Baumer, Timothy Berrill, Sarah C. Botwinick, Jonathan L. Gonzales, Kevin Ho, Allison Kundrik, Luke Kwon, Tim LaRowe, Chanh P. Nguyen, Fredy Ramirez, Peter Schaedler, William Ulrich, Amber Wallace, Yuchen Wan, and Benjamin Weinfeld. 2018. What Would You Do? Design Fiction and Ethics. In Proceedings of the 2018 ACM Conference on Supporting Groupwork (Sanibel Island, Florida, USA) (GROUP ’18). Association for Computing Machinery, New York, NY, USA, 244–256. https://doi.org/10.1145/3148330.3149405

[19] James "Bo" Begole, John C. Tang, Randall B. Smith, and Nicole Yankelovich. 2002. Work Rhythms: Analyzing Visualizations of Awareness Histories of Distributed Groups. In Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work (New Orleans, Louisiana, USA) (CSCW ’02). ACM, New York, NY, USA, 334–343. https://doi.org/10.1145/587078.587125

[20] Annelie Berner, Monica Seyfried, Calle Nordenskjöld, Peter Kuhberg, and Irina Shklovski. 2019. Bear & Co: Simulating Value Conflicts in IoT Development. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI EA ’19). Association for Computing Machinery, New York, NY, USA, 1–4. https://doi.org/10.1145/3290607.3313271

[21] Robert W. Blanning. 1979. The functions of a decision support system. Information & Management 2, 3 (1979), 87–93.

[22] Luc Bovens. 2009. The ethics of nudge. In Preference Change. Springer, 207–219.

[23] Karen L. Boyd. 2021. Datasheets for Datasets Help ML Engineers Notice and Understand Ethical Issues in Training Data. Proc. ACM Hum.-Comput. Interact. 5, CSCW2, Article 438 (Oct. 2021), 27 pages. https://doi.org/10.1145/3479582

[24] Rutger Bregman. 2020. Humankind: A Hopeful History. Little, Brown and Company, New York, NY.

[25] Philip Brey. 2015. Design for the value of human well-being. In Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, Jeroen van den Hoven, Pieter E. Vermaas, and Ibo van de Poel (Eds.). Springer, Dordrecht, Netherlands, 365–382. https://doi.org/10.1007/978-94-007-6970-0_14

[26] Ana Caraban, Evangelos Karapanos, Daniel Gonçalves, and Pedro Campos. 2019. 23 Ways to Nudge: A Review of Technology-Mediated Nudging in Human-Computer Interaction. Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3290605.3300733

[27] Ana Caraban, Loukas Konstantinou, and Evangelos Karapanos. 2020. The Nudge Deck: A Design Support Tool for Technology-Mediated Nudging. Association for Computing Machinery, New York, NY, USA, 395–406. https://doi.org/10.1145/3357236.3395485

[28] S. Card, T. Moran, and A. Newell. 1983. The Psychology of Human–Computer Interaction. Lawrence Erlbaum Associates, Hillsdale, NJ.

[29] Shruti Chandra, Raul Paradeda, Hang Yin, Pierre Dillenbourg, Rui Prada, and Ana Paiva. 2018. Do Children Perceive Whether a Robotic Peer is Learning or Not?. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (Chicago, IL, USA) (HRI ’18). Association for Computing Machinery, New York, NY, USA, 41–49. https://doi.org/10.1145/3171221.3171274

[30] Shruthi Sai Chivukula, Colin M. Gray, and Jason A. Brier. 2019. Analyzing Value Discovery in Design Decisions Through Ethicography. Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300307

[31] Shruthi Sai Chivukula, Ziqing Li, Anne C. Pivonka, Jingning Chen, and Colin M. Gray. 2021. Surveying the Landscape of Ethics-Focused Design Methods. arXiv:2102.08909 [cs.HC]

[32] Kenny K. N. Chow. 2019. Designing Representations of Behavioral Data with Blended Causality: An Approach to Interventions for Lifestyle Habits. In Persuasive Technology: Development of Persuasive and Behavior Change Support Systems, Harri Oinas-Kukkonen, Khin Than Win, Evangelos Karapanos, Pasi Karppinen, and Eleni Kyza (Eds.). Springer International Publishing, Cham, 52–64.

[33] Mark Coeckelbergh. 2021. How to use virtue ethics for thinking about the moral standing of social robots: a relational interpretation in terms of practices, habits, and performance. International Journal of Social Robotics 13, 1 (2021), 31–40.

[34] Michael Correll. 2019. Ethical Dimensions of Visualization Research. Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3290605.3300418

[35] Norman Cousins. 1989. The poet and the computer. National Forum 69 (1989), 6+. https://link.gale.com/apps/doc/A7539739/AONE?u=anon~3a3b2d13&sid=googleScholar&xid=867ca486

[36] Claudia Daudén Roquet and Corina Sas. 2018. Evaluating Mindfulness Meditation Apps. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal, QC, Canada) (CHI EA ’18). ACM, New York, NY, USA, Article LBW575, 6 pages. https://doi.org/10.1145/3170427.3188616

[37] Pieter Desmet and Marc Hassenzahl. 2012. Towards happiness: Possibility-driven design. In Human-Computer Interaction: The Agency Perspective, Marielba Zacarias and José Valente de Oliveira (Eds.). Springer, Berlin, Germany, 3–27.

[38] Daniella DiPaola, Blakeley H. Payne, and Cynthia Breazeal. 2020. Decoding Design Agendas: An Ethical Design Activity for Middle School Students. In Proceedings of the Interaction Design and Children Conference (London, United Kingdom) (IDC ’20). Association for Computing Machinery, New York, NY, USA, 1–10. https://doi.org/10.1145/3392063.3394396

[39] Charles Ess. 2006. Ethical pluralism and global information ethics. Ethics and Information Technology 8, 4 (2006), 215–226.

[40] Alexander Faaborg and Henry Lieberman. 2006. A Goal-Oriented Web Browser. Association for Computing Machinery, New York, NY, USA, 751–760. https://doi.org/10.1145/1124772.1124883

[41] Casey Fiesler, Natalie Garrett, and Nathan Beard. 2020. What Do We Teach When We Teach Tech Ethics? A Syllabi Analysis. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (Portland, OR, USA) (SIGCSE ’20). Association for Computing Machinery, New York, NY, USA, 289–295. https://doi.org/10.1145/3328778.3366825

[42] Rowanne Fleck and Geraldine Fitzpatrick. 2010. Reflecting on Reflection: Framing a Design Landscape. In Proceedings of the 22nd Conference of the Computer-Human Interaction Special Interest Group of Australia on Computer-Human Interaction (Brisbane, Australia) (OZCHI ’10). ACM, New York, NY, USA, 216–223. https://doi.org/10.1145/1952222.1952269

[43] Luciano Floridi. 2013. The Ethics of Information. Oxford University Press, Oxford, UK.

[44] M. Foucault. 1988. Technologies of the self. In Technologies of the Self: A Seminar with Michel Foucault, L. H. Martin, H. Gutman, and P. H. Hutton (Eds.). University of Massachusetts Press, Amherst, MA, 16–49.

[45] Batya Friedman and David G. Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. MIT Press, Cambridge, MA.

[46] Batya Friedman, Peter Kahn, and Alan Borning. 2002. Value sensitive design: Theory and methods. (December 2002), 12 pages. https://faculty.washington.edu/pkahn/articles/vsd-theory-methods-tr.pdf

[47] J. Christian Gerdes. 2020. The Virtues of Automated Vehicle Safety - Mapping Vehicle Safety Approaches to Their Underlying Ethical Frameworks. In 2020 IEEE Intelligent Vehicles Symposium (IV). 107–113. https://doi.org/10.1109/IV47402.2020.9304583

[48] Eva Geurts, Fanny Van Geel, Peter Feys, and Karin Coninx. 2019. WalkWithMe: Personalized Goal Setting and Coaching for Walking in People with Multiple Sclerosis. In Proceedings of the 27th ACM Conference on User Modeling, Adaptation and Personalization (Larnaca, Cyprus) (UMAP ’19). Association for Computing Machinery, New York, NY, USA, 51–60. https://doi.org/10.1145/3320435.3320459

[49] Andrew Gibson, Adam Aitken, Ágnes Sándor, Simon Buckingham Shum, Cherie Tsingos-Lucas, and Simon Knight. 2017. Reflective Writing Analytics for Actionable Feedback. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference (Vancouver, British Columbia, Canada) (LAK ’17). ACM, New York, NY, USA, 153–162. https://doi.org/10.1145/3027385.3027436

[50] Anna Gjika. 2020. New media, old paradigms: News representations of technology in adolescent sexual assault. Crime, Media, Culture 16, 3 (2020), 415–430. https://doi.org/10.1177/1741659019873758

[51] Kazjon Grace, Stephanie Grace, Mary Lou Maher, Mohammad Javad Mahzoon, Lina Lee, Lilla LoCurto, and Bill Outcault. 2017. The Willful Marionette: Exploring Responses to Embodied Interaction. In Proceedings of the 2017 ACM SIGCHI Conference on Creativity and Cognition (Singapore, Singapore) (C&C ’17). Association for Computing Machinery, New York, NY, USA, 15–27. https://doi.org/10.1145/3059454.3059475

[52] Colin M. Gray. 2019. Revealing Students’ Ethical Awareness during Problem Framing. International Journal of Art & Design Education 38, 2 (2019), 299–313.

[53] Colin M. Gray and Shruthi Sai Chivukula. 2019. Ethical Mediation in UX Practice. Association for Computing Machinery, New York, NY, USA, 1–11. https://doi.org/10.1145/3290605.3300408

[54] Colin M. Gray, Cesur Dagli, Muruvvet Demiral-Uzan, Funda Ergulec, Verily Tan, Abdullah A. Altuwaijri, Khendum Gyabak, Megan Hilligoss, Remzi Kizilboga, Kei Tomita, et al. 2015. Judgment and instructional design: How ID practitioners work in practice. Performance Improvement Quarterly 28, 3 (2015), 25–49.

[55] Colin M. Gray, Yubo Kou, Bryan Battles, Joseph Hoggatt, and Austin L. Toombs. 2018. The dark (patterns) side of UX design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.

[56] Mary L. Gray and Siddharth Suri. 2019. Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Eamon Dolan Books.

[57] Barbara Grosse-Hering, Jon Mason, Dzmitry Aliakseyeu, Conny Bakker, and Pieter Desmet. 2013. Slow design for meaningful interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Paris, France) (CHI ’13). ACM, New York, NY, 3431–3440. https://doi.org/10.1145/2470654.2466472

[58] Jonathan Grudin. 1990. The Computer Reaches Out: The Historical Continuity of Interface Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, Washington, USA) (CHI ’90). ACM, New York, NY, USA, 261–268. https://doi.org/10.1145/97243.97284

[59] Seda Gürses, Rebekah Overdorf, and Ero Balsa. 2018. POTs: The revolution will not be optimized? Workshop on Hot Topics in Privacy Enhancing Technologies (2018), 3. arXiv:1806.02711 https://petsymposium.org/2018/files/hotpets/3-gurses.pdf

[60] Lars Hallnäs and Johan Redström. 2001. Slow technology–designing for reflection. Personal and Ubiquitous Computing 5, 3 (2001), 201–212.

[61] Gilbert Harman. 1999. Moral philosophy meets social psychology: Virtue ethics and the fundamental attribution error. In Proceedings of the Aristotelian Society. JSTOR, 315–331.

[62] Steve Harrison, Phoebe Sengers, and Deborah Tatar. 2011. Making epistemological trouble: Third-paradigm HCI as successor science. Interacting with Computers 23, 5 (2011), 385–392.

[63] Richard Heersmink. 2018. A virtue epistemology of the Internet: Search engines, intellectual virtues and education. Social Epistemology 32, 1 (2018), 1–12.

[64] Marcia Homiak. 2019. Moral character. In The Stanford Encyclopedia of Philosophy, Edward N. Zalta (Ed.). Stanford University, Stanford, CA. https://plato.stanford.edu/archives/sum2019/entries/moral-character/

[65] Iris Howley, Takayuki Kanda, Kotaro Hayashi, and Carolyn Rosé. 2014. Effects of Social Presence and Social Role on Help-Seeking and Learning. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (Bielefeld, Germany) (HRI ’14). Association for Computing Machinery, New York, NY, USA, 415–422. https://doi.org/10.1145/2559636.2559667

[66] Andreas Huber, Astrid Weiss, and Marjo Rauhala. 2016. The Ethical Risk of Attachment: How to Identify, Investigate and Predict Potential Ethical Risks in the Development of Social Companion Robots. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction (Christchurch, New Zealand) (HRI ’16). IEEE Press, 367–374.

[67] Rosalind Hursthouse and Glen Pettigrove. 2018. Virtue ethics. In The Stanford Encyclopedia of Philosophy, Edward N. Zalta (Ed.). Stanford University, Stanford, CA. https://plato.stanford.edu/archives/win2018/entries/ethics-virtue/

[68] Anthony Jameson, Bettina Berendt, Silvia Gabrielli, Federica Cena, Cristina Gena, Fabiana Vernero, and Katharina Reinecke. 2014. Choice architecture for human-computer interaction. Foundations and Trends in Human-Computer Interaction 7, 1–2 (2014), 1–235.

[69] Mathias Jesse and Dietmar Jannach. 2021. Digital nudging with recommender systems: Survey and future directions. Computers in Human Behavior Reports 3 (2021), 100052. https://doi.org/10.1016/j.chbr.2020.100052

[70] Hans Jonas. 1985. The Imperative of Responsibility: In Search of an Ethics for the Technological Age. University of Chicago Press, Chicago, IL.

[71] Boyoung Kim, Ruchen Wen, Qin Zhu, Tom Williams, and Elizabeth Phillips. 2021. Robots as Moral Advisors: The Effects of Deontological, Virtue, and Confucian Role Ethics on Encouraging Honest Behavior. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (Boulder, CO, USA) (HRI ’21 Companion). Association for Computing Machinery, New York, NY, USA, 10–18. https://doi.org/10.1145/3434074.3446908

[72] Suntae Kim, Matthew J. Karlesky, Christopher G. Myers, and Todd Schifeling. 2016. Why companies are becoming B corporations. Harvard Business Review 17 (2016), 2–5.

[73] Stephen Kimani, Nilufar Baghaei, Jill Freyne, Shlomo Berkovsky, Dipak Bhandari, and Greg Smith. 2010. Gender and Role Differences in Family-Based Healthy Living Networks. Association for Computing Machinery, New York, NY, USA, 4219–4224. https://doi.org/10.1145/1753846.1754129

[74] Reuben Kirkham. 2020. Using European Human Rights Jurisprudence for Incorporating Values into Design. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (Eindhoven, Netherlands) (DIS ’20). Association for Computing Machinery, New York, NY, USA, 115–128. https://doi.org/10.1145/3357236.3395539

[75] Chrysanthi Konstanti and Evangelos Karapanos. 2018. An Inquiry into Goal-Setting Practices with Physical Activity Trackers. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal, QC, Canada) (CHI EA ’18). Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3170427.3188663

[76] Shiri Kremer-Davidson, Inbal Ronen, Avi Kaplan, and Maya Barnea. 2017. "Personal Social Dashboard": A Tool for Measuring Your Social Engagement Effectiveness in the Enterprise. In Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization (Bratislava, Slovakia) (UMAP ’17). Association for Computing Machinery, New York, NY, USA, 122–130. https://doi.org/10.1145/3079628.3079664

[77] Ryan Lange, Nicholas David Bowman, Jaime Banks, and Amanda Lange. 2015. Grand Theft Auto(mation). International Journal of Technology and Human Interaction 11, 3 (April 2015), 35–50.

[78] Nancy K. Lankton, D. Harrison McKnight, and Jason Bennett Thatcher. 2012. The Moderating Effects of Privacy Restrictiveness and Experience on Trusting Beliefs and Habit: An Empirical Test of Intention to Continue Using a Social Networking Website. IEEE Transactions on Engineering Management 59, 4 (Oct. 2012), 654–665.

[79] Joseph Lawrance, Margaret Burnett, Rachel Bellamy, Christopher Bogart, and Calvin Swart. 2010. Reactive Information Foraging for Evolving Goals. Association for Computing Machinery, New York, NY, USA, 25–34. https://doi.org/10.1145/1753326.1753332

[80] Thibault Le Texier. 2019. Debunking the Stanford Prison Experiment. American Psychologist 74, 7 (2019), 823.

[81] J. C. R. Licklider. 1960. Man–computer symbiosis. IRE Transactions on Human Factors in Electronics 1 (1960), 4–11.

[82] Ann Light, Alison Powell, and Irina Shklovski. 2017. Design for Existential Crisis in the Anthropocene Age. In Proceedings of the 8th International Conference on Communities and Technologies (Troyes, France) (C&T ’17). ACM, New York, NY, USA, 270–279. https://doi.org/10.1145/3083671.3083688

[83] Yiling Lin, Magda Osman, and Richard Ashcroft. 2017. Nudge: concept, effectiveness, and ethics. Basic and Applied Social Psychology 39, 6 (2017), 293–306.

[84] Alasdair MacIntyre. 1981. After Virtue: A Study in Moral Theory. University of Notre Dame Press, Notre Dame, IN.

[85] Nathan Magyar, Xuenan Xu, and Molly Maher. 2020. Creating and Evaluating a Goal Setting Prototype for MOOCs. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI EA ’20). Association for Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/3334480.3375229

[86] Arunesh Mathur, Mihir Kshirsagar, and Jonathan Mayer. 2021. What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3411764.3445610

[87] Roisin McNaney, Christopher Bull, Lynne Mackie, Floriane Dahman, Helen Stringer, Dan Richardson, and Daniel Welsh. 2018. StammerApp: Designing a Mobile Application to Support Self-Reflection and Goal Setting for People Who Stammer. Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3173574.3173841

[88] Joshua McVeigh-Schultz, Max Kreminski, Keshav Prasad, Perry Hoberman, and Scott S. Fisher. 2018. Immersive Design Fiction: Using VR to Prototype Speculative Interfaces and Interaction Rituals Within a Virtual Storyworld. In Proceedings of the 2018 Designing Interactive Systems Conference (Hong Kong, China) (DIS ’18). ACM, New York, NY, USA, 817–829. https://doi.org/10.1145/3196709.3196793

[89] C. Lawrence Meador and David N. Ness. 1973. A case study of the use of a decision support system. (1973).

[90] Elisa D. Mekler and Kasper Hornbæk. 2019. A Framework for the Experience of Meaning in Human-Computer Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). ACM, New York, NY, USA, Article 225, 15 pages. https://doi.org/10.1145/3290605.3300455

[91] John Stuart Mill. 2001. Utilitarianism. Hackett, Indianapolis, IN.

[92] David E. Millard, Sarah Hewitt, Kieron O’Hara, Heather Packer, and Neil Rogers. 2019. The Unethical Future of Mixed Reality Storytelling. In Proceedings of the 8th International Workshop on Narrative and Hypertext (Hof, Germany) (NHT ’19). Association for Computing Machinery, New York, NY, USA, 5–8. https://doi.org/10.1145/3345511.3349283

[93] Elliot G. Mitchell, Elizabeth M. Heitkemper, Marissa Burgermaster, Matthew E. Levine, Yishen Miao, Maria L. Hwang, Pooja M. Desai, Andrea Cassells, Jonathan N. Tobin, Esteban G. Tabak, David J. Albers, Arlene M. Smaldone, and Lena Mamykina. 2021. From Reflection to Action: Combining Machine Learning with Expert Knowledge for Nutrition Goal Recommendations. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3411764.3445555

[94] Ine Mols, Elise van den Hoven, and Berry Eggen. 2016. Technologies for Everyday Life Reflection: Illustrating a Design Space. In Proceedings of the TEI ’16: Tenth International Conference on Tangible, Embedded, and Embodied Interaction (Eindhoven, Netherlands) (TEI ’16). ACM, New York, NY, USA, 53–61. https://doi.org/10.1145/2839462.2839466

[95] G. Moriarty. 2001. Three kinds of ethics for three kinds of engineering. IEEE Technology and Society Magazine 20, 3 (Fall 2001), 31–38. https://doi.org/10.1109/44.952763

[96] Evgeny Morozov. 2013. To Save Everything, Click Here: The Folly of Technological Solutionism. PublicAffairs, New York, NY.

[97] Sean A. Munson and Sunny Consolvo. 2012. Exploring goal-setting, rewards, self-monitoring, and sharing to motivate physical activity. In 2012 6th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) and Workshops. 25–32. https://doi.org/10.4108/icst.pervasivehealth.2012.248691

[98] Jakob Nielsen. 2012. Usability 101: Introduction to Usability. (2012). https://www.nngroup.com/articles/usability-101-introduction-to-usability/

[99] Jasmin Niess and Paweł W. Woźniak. 2018. Supporting Meaningful Personal Fitness: The Tracker Goal Evolution Model. Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3173574.3173745

[100] Helen Nissenbaum. 2001. How computer systems embody values. Computer 34, 3 (2001), 120–119.

[101] Jacqueline Novogratz. 2020. Manifesto for a Moral Revolution: Practices to Build a Better World. Henry Holt.

[102] William Odom, Ron Wakkary, Jeroen Hol, Bram Naus, Pepijn Verburg, Tal Amram, and Amy Yo Sue Chen. 2019. Investigating Slowness As a Frame to Design Longer-Term Experiences with Personal Data: A Field Study of Olly. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Glasgow, Scotland, UK) (CHI ’19). ACM, New York, NY, USA, Article 34, 16 pages. https://doi.org/10.1145/3290605.3300264

[103] Leila Ouchchy, Allen Coin, and Veljko Dubljević. 2020. AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. AI & Society 35, 4 (Dec. 2020), 927–936. https://doi.org/10.1007/s00146-020-00965-5

[104] Fatih Kursat Ozenc and Margaret Hagan. 2018. Ritual Design: Crafting Team Rituals for Meaningful Organizational Change. In Advances in Affective and Pleasurable Design, WonJoon Chung and Cliff Sungsoo Shin (Eds.). Springer International Publishing, Cham, Switzerland, 146–157.

[105] Panagiotis Papapetrou and George Roussos. 2014. Social Context Discovery from Temporal App Use Patterns. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication (Seattle, Washington) (UbiComp ’14 Adjunct). Association for Computing Machinery, New York, NY, USA, 397–402. https://doi.org/10.1145/2638728.2641699

[106] Jessica A. Pater, Lauren E. Reining, Andrew D. Miller, Tammy Toscos, and Elizabeth D. Mynatt. 2019. "Notjustgirls": Exploring Male-Related Eating Disordered Content across Social Media Platforms. Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3290605.3300881

[107] Brian T. Pentland and Martha S. Feldman. 2008. Designing routines: On the folly of designing artifacts, while hoping for patterns of action. Information and Organization 18, 4 (2008), 235–250. https://doi.org/10.1016/j.infoandorg.2008.08.001

[108] Sarah Perez. [n.d.]. Instagram’s new test lets you choose if you want to hide ‘Likes,’ Facebook test to follow. TechCrunch ([n. d.]). https://techcrunch.com/2021/04/14/instagrams-new-test-lets-you-choose-if-you-want-to-hide-likes-facebook-test-to-follow/

[109] Rifca Peters, Joost Broekens, and Mark A. Neerincx. 2017. Guidelines for Tree-Based Collaborative Goal Setting. In Proceedings of the 22nd International Conference on Intelligent User Interfaces (Limassol, Cyprus) (IUI ’17). Association for Computing Machinery, New York, NY, USA, 401–405. https://doi.org/10.1145/3025171.3025188

[110] Daniela Petrelli and Ann Light. 2014. Family Rituals and the Potential for Interaction Design: A Study of Christmas. ACM Transactions on Computer-Human Interaction 21, 3, Article 16 (June 2014), 29 pages. https://doi.org/10.1145/2617571

[111] Lara S. G. Piccolo, Diotima Bertel, Tracie Farrell, and Pinelopi Troullinou. 2021. Opinions, Intentions, Freedom of Expression, ... , and Other Human Aspects of Misinformation Online. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3411763.3441345

[112] Olga Pierrakos, Mike Prentice, Cameron Silverglate, Michael Lamb, Alana Demaske, and Ryan Smout. 2019. Reimagining Engineering Ethics: From Ethics Education to Character Education. In 2019 IEEE Frontiers in Education Conference (FIE). 1–9. https://doi.org/10.1109/FIE43999.2019.9028690

[113] Charlie Pinder, Jo Vermeulen, Benjamin R. Cowan, and Russell Beale. 2018. Digital Behaviour Change Interventions to Break and Form Habits. ACM Transactions on Computer-Human Interaction 25, 3, Article 15 (June 2018), 66 pages. https://doi.org/10.1145/3196830

[114] Adam Poulsen, Oliver K. Burmeister, and David Tien. 2018. Care Robot Transparency Isn’t Enough for Trust. In 2018 IEEE Region Ten Symposium (Tensymp). 293–297. https://doi.org/10.1109/TENCONSpring.2018.8692047

[115] J. Rachels and S. Rachels. 2015. The Elements of Moral Philosophy. McGraw-Hill, New York, NY.

[116] Wessel Reijers and Bert Gordijn. 2019. Moving from value sensitive design to virtuous practice design. Journal of Information, Communication and Ethics in Society 17, 2 (Jan. 2019), 196–209. https://doi.org/10.1108/JICES-10-2018-0080

[117] Claudia Roda (Ed.). 2011. Human Attention and Its Implications for Human-Computer Interaction. Cambridge University Press.

[118] Leonardo Rundo, Roberto Pirrone, Salvatore Vitabile, Evis Sala, and Orazio Gambino. 2020. Recent advances of HCI in decision-making tasks for optimized clinical workflows and precision medicine. Journal of Biomedical Informatics 108 (Aug. 2020), 103479.

[119] Herman Saksono and Andrea G. Parker. 2017. Reflective Informatics Through Family Storytelling: Self-discovering Physical Activity Predictors. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). ACM, New York, NY, USA, 5232–5244. https://doi.org/10.1145/3025453.3025651

[120] Corina Sas and Rohit Chopra. 2015. MeditAid: a wearable adaptive neurofeedback-based system for training mindfulness state. Personal and Ubiquitous Computing 19, 7 (Oct. 2015), 1169–1182. https://doi.org/10.1007/s00779-015-0870-z

[121] Eike Schneiders, Mikkel Bjerregaard Kristensen, Michael Kvist Svangren, and Mikael B. Skov. 2020. Temporal Impact on Cognitive Distraction Detection for Car Drivers using EEG. In 32nd Australian Conference on Human-Computer Interaction. 594–601.

[122] Donald A. Schön. 1984. The Reflective Practitioner: How Professionals Think in Action. Vol. 5126. Basic Books, New York, NY.

[123] Michelle Scott, Mary Barreto, Filipe Quintal, and Ian Oakley. 2011. Understanding Goal Setting Behavior in the Context of Energy Consumption Reduction. In Human-Computer Interaction – INTERACT 2011, Pedro Campos, Nicholas Graham, Joaquim Jorge, Nuno Nunes, Philippe Palanque, and Marco Winckler (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 129–143.

[124] Evan Selinger and Kyle Whyte. 2011. Is there a right way to nudge? The practice and ethics of choice architecture. Sociology Compass 5, 10 (2011), 923–935.

[125] Ben Rydal Shapiro, Amanda Meng, Cody O’Donnell, Charlotte Lou, EdwinZhao, Bianca Dankwa, and Andrew Hostetler. 2020. Re-Shape: A Methodto Teach Data Ethics for Data Science Education. In Proceedings of the 2020CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA)(CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13.https://doi.org/10.1145/3313831.3376251

[126] Katie Shilton. 2018. Values and ethics in human-computer interaction. Foundations and Trends® in Human–Computer Interaction 12, 2 (2018).

[127] Frank M. Shipman and Catherine C. Marshall. 2020. Ownership, Privacy, and Control in the Wake of Cambridge Analytica: The Relationship between Attitudes and Awareness. Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3313831.3376662


[128] Sheila Jasanoff. 2016. The Ethics of Invention: Technology and the Human Future. W. W. Norton, New York, NY.

[129] Jesse Singal. 2021. The Quick Fix: Why Fad Psychology Can’t Cure Our Social Ills. Farrar, Straus and Giroux, New York, NY.

[130] Peter Singer. 1981. The Expanding Circle: Ethics and Sociobiology. Princeton University Press, Princeton, NJ.

[131] Jacek Sliwinski, Mary Katsikitis, and Christian Martyn Jones. 2015. Mindful Gaming: How Digital Games Can Improve Mindfulness. In Human-Computer Interaction – INTERACT 2015, Julio Abascal, Simone Barbosa, Mirko Fetter, Tom Gross, Philippe Palanque, and Marco Winckler (Eds.). Springer International Publishing, Cham, 167–184.

[132] Dustin A. Smith and Henry Lieberman. 2010. The Why UI: Using Goal Networks to Improve User Interfaces. In Proceedings of the 15th International Conference on Intelligent User Interfaces (Hong Kong, China) (IUI ’10). Association for Computing Machinery, New York, NY, USA, 377–380. https://doi.org/10.1145/1719970.1720035

[133] Robert Sparrow. 2016. Kicking a Robot Dog. In The Eleventh ACM/IEEE International Conference on Human Robot Interaction (Christchurch, New Zealand) (HRI ’16). IEEE Press, 229.

[134] Carmen Stavrositu and S. Shyam Sundar. 2008. Can blogs empower women? Designing agency-enhancing and community-building interfaces. In CHI ’08 Extended Abstracts on Human Factors in Computing Systems. 2781–2786.

[135] Marc Steen. 2013. Virtues in Participatory Design: Cooperation, Curiosity, Creativity, Empowerment and Reflexivity. Science and Engineering Ethics 19, 3 (Sept. 2013), 945–962. https://doi.org/10.1007/s11948-012-9380-9

[136] Christos Stergiou and Geert Arys. 2001. Policy Based Agent Management Using Conversation Patterns. In Proceedings of the Fifth International Conference on Autonomous Agents (Montreal, Quebec, Canada) (AGENTS ’01). Association for Computing Machinery, New York, NY, USA, 220–221. https://doi.org/10.1145/375735.376290

[137] Yolande Strengers, Jathan Sadowski, Zhuying Li, Anna Shimshak, and Florian ’Floyd’ Mueller. 2021. What Can HCI Learn from Sexual Consent? A Feminist Process of Embodied Consent for Interactions with Emerging Technologies. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 405, 13 pages. https://doi.org/10.1145/3411764.3445107

[138] Norman Makoto Su, Oliver Brdiczka, and Bo Begole. 2013. The Routineness of Routines: Measuring Rhythms of Media Interaction. Human-Computer Interaction 28, 4 (July 2013), 287–334.

[139] Norman Makoto Su, Victor Kaptelinin, Jeffrey Bardzell, Shaowen Bardzell, Jed R. Brubaker, Ann Light, and Dag Svanæs. 2019. Standing on the Shoulders of Giants: Exploring the Intersection of Philosophy and HCI. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. 1–8.

[140] Richard H. Thaler and Cass R. Sunstein. 2008. Nudge: Improving Decisions about Health, Wealth, and Happiness. Yale University Press, New Haven, CT.

[141] Anja Thieme, Jayne Wallace, Paula Johnson, John McCarthy, Siân Lindley, Peter Wright, Patrick Olivier, and Thomas D. Meyer. 2013. Design to Promote Mindfulness Practice and Sense of Self for Vulnerable Women in Secure Hospital Services. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Paris, France) (CHI ’13). ACM, New York, NY, USA, 2647–2656. https://doi.org/10.1145/2470654.2481366

[142] Sarah M. Thornton, Selina Pan, Stephen M. Erlien, and J. Christian Gerdes. 2017. Incorporating Ethical Considerations Into Automated Vehicle Control. IEEE Transactions on Intelligent Transportation Systems 18, 6 (June 2017), 1429–1439. https://doi.org/10.1109/TITS.2016.2609339

[143] Peter Tolmie, James Pycock, Tim Diggins, Allan MacLean, and Alain Karsenty. 2002. Unremarkable Computing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Minneapolis, Minnesota, USA) (CHI ’02). ACM, New York, NY, USA, 399–406. https://doi.org/10.1145/503376.503448

[144] Gustavo F. Tondello, Hardy Premsukh, and Lennart Nacke. 2018. A theory of gamification principles through goal-setting theory. In Hawaii International Conference on System Sciences.

[145] Shannon Vallor. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.

[146] Niels van Berkel, Benjamin Tag, Jorge Goncalves, and Simo Hosio. 2020. Human-centred artificial intelligence: a contextual morality perspective. Behaviour & Information Technology (2020), 1–17. https://doi.org/10.1080/0144929X.2020.1818828

[147] Jeroen Van den Hoven, Pieter Vermaas, and Ibo Van de Poel (Eds.). 2015. Handbook of Ethics, Values and Technological Design. Springer, Dordrecht, Netherlands.

[148] Charlotte van Hooijdonk. 2021. Chatbots in the Tourism Industry: The Effects of Communication Style and Brand Familiarity on Social Presence and Brand Attitude. In Adjunct Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (Utrecht, Netherlands) (UMAP ’21). Association for Computing Machinery, New York, NY, USA, 375–381. https://doi.org/10.1145/3450614.3463599

[149] James Vincent. 2020. Twitter is bringing its ‘read before you retweet’ prompt to all users. The Verge (Sept. 2020). https://www.theverge.com/2020/9/25/21455635/twitter-read-before-you-tweet-article-prompt-rolling-out-globally-soon

[150] Stuart Walker. 2011. The Spirit of Design: Objects, Environment and Meaning. Earthscan, London, UK.

[151] Mark Weiser. 1991. The computer for the 21st century. Scientific American 265, 3 (1991), 94–104.

[152] Ruchen Wen. 2021. Toward Hybrid Relational-Normative Models of Robot Cognition. In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (Boulder, CO, USA) (HRI ’21 Companion). Association for Computing Machinery, New York, NY, USA, 568–570. https://doi.org/10.1145/3434074.3446353

[153] Christopher D. Wickens, Sallie E. Gordon, Yili Liu, and John Lee. 2004. An Introduction to Human Factors Engineering (2nd ed.). Pearson Prentice Hall, Upper Saddle River, NJ.

[154] Tom Williams, Qin Zhu, Ruchen Wen, and Ewart J. de Visser. 2020. The Confucian Matador: Three Defenses Against the Mechanical Bull. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (Cambridge, United Kingdom) (HRI ’20). Association for Computing Machinery, New York, NY, USA, 25–33. https://doi.org/10.1145/3371382.3380740

[155] Theresa Wilson and Gregor Hofer. 2011. Using Linguistic and Vocal Expressiveness in Social Role Recognition. In Proceedings of the 16th International Conference on Intelligent User Interfaces (Palo Alto, CA, USA) (IUI ’11). Association for Computing Machinery, New York, NY, USA, 419–422. https://doi.org/10.1145/1943403.1943480

[156] Allison Woodruff, James Landay, and Michael Stonebraker. 1998. Goal-Directed Zoom. In CHI 98 Conference Summary on Human Factors in Computing Systems (Los Angeles, California, USA) (CHI ’98). Association for Computing Machinery, New York, NY, USA, 305–306. https://doi.org/10.1145/286498.286781

[157] Jennifer Cole Wright, Michael T. Warren, and Nancy E. Snow. 2021. Understanding Virtue: Theory and Measurement. Oxford University Press.

[158] Peter Wright and John McCarthy. 2010. Experience-Centered Design: Designers, Users, and Communities in Dialogue. Morgan and Claypool Publishers, San Rafael, CA.

[159] Diyi Yang, Robert E. Kraut, Tenbroeck Smith, Elijah Mayfield, and Dan Jurafsky. 2019. Seekers, Providers, Welcomers, and Storytellers: Modeling Social Roles in Online Health Communities. Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3290605.3300574

[160] Amal Yassien, ElHassan B. Makled, Passant Elagroudy, Nouran Sadek, and Slim Abdennadher. 2021. Give-Me-A-Hand: The Effect of Partner’s Gender on Collaboration Quality in Virtual Reality. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3411763.3451601

[161] Yuliang Yun, Dexin Ma, and Meihong Yang. 2021. Human–computer interaction-based Decision Support System with Applications in Data Mining. Future Generation Computer Systems 114 (2021), 285–289. https://doi.org/10.1016/j.future.2020.07.048

[162] Linda Zagzebski. 2017. Exemplarist Moral Theory. Oxford University Press, Oxford, UK.

[163] Haiyi Zhu, Robert Kraut, and Aniket Kittur. 2012. Organizing without Formal Organization: Group Identification, Goal Setting and Social Modeling in Directing Online Production. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (Seattle, Washington, USA) (CSCW ’12). Association for Computing Machinery, New York, NY, USA, 935–944. https://doi.org/10.1145/2145204.2145344

[164] John Zoshak and Kristin Dew. 2021. Beyond Kant and Bentham: How Ethical Theories Are Being Used in Artificial Moral Agents. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 590, 15 pages. https://doi.org/10.1145/3411764.3445102