Answers To Ethical Dilemmas That Occur In Software Development


Transcript of Answers To Ethical Dilemmas That Occur In Software Development

Johnny Søraker, Assistant Professor at the University of Twente


Essay by James Piggott

Date: February 2nd 2014

Email: [email protected]


Introduction

This paper aims to answer a variety of ethical dilemmas that commonly occur in software development. Because software is used in many applications that may be hazardous if implemented improperly, questions arise about who is ethically responsible for the consequences. As an example scenario, this paper uses the development of a software tool for simulating the demolition of large buildings. There is thus, beyond any doubt, a real danger that should a flaw in the implemented tool remain undetected or uncorrected, the consequences could be fatal. To answer what the ethical consequences are, this paper is divided into three parts. Part 1: Expert Knowledge raises the issue of how extensive the preparations for software development need to be, and how expert knowledge can be used without error or misunderstanding. Part 2: One Problem discusses who is responsible when the project leader of the team leaves, and what measures need to be taken so that a staff transition preserves both technical and ethical considerations. Part 3: Whistleblowing discusses what a person should do when they know that a flawed product is being developed, and when the rules regarding whistleblowing apply. The paper concludes with remarks on what a person involved in the development of potentially dangerous software systems should do.

The scenario

You're working for a company that has won a contract to create a software tool for simulating the demolition of large buildings. In this program it must be possible to model (visualize) different kinds of buildings, and to simulate how different strategies for placing explosives will determine how the building is demolished. The tool must also be available on the Internet so that multiple workers can work on different aspects of a project simultaneously. The purpose of the program is to enable demolition workers to increase the safety and accuracy of demolitions. You have been designated project leader, in charge of overseeing all aspects of the development process. Needless to say, a number of problems, some of them with potentially catastrophic consequences, could occur.

Part 1: Expert Knowledge

What kinds of preparations will you initiate before you start the development of the program itself (delegation of work, finding necessary expertise, how to make sure expertise finds its way into code, etc.)? Most importantly, how can you ensure (as far as possible) that the experts' knowledge makes its way into the software without error or misunderstandings?

To answer this question completely, a systematic approach should leave no room for accidental omissions, errors or misunderstandings. I therefore believe the issue is best addressed by constructing it as a logical argument whose conclusion is the desired outcome (Tavani, 2004). After listing all possible influences as the premises of the argument, each premise should be answered in such a fashion that it leaves no doubt that it has been answered correctly and truthfully. I believe the following premises support the conclusion:

1. Do the mechanisms designed to protect against the harmful effects of delegating work comprehensively exclude those effects?


2. Does the design process ensure that all necessary technical expertise is consulted adequately?

3. Does the knowledge gained from experts make it into the code?

4. Is it excluded that expert knowledge is misunderstood or used in error?

Conclusion: the preparations for the program intended to demolish buildings are foolproof.

In order to ensure that the conclusion is true, we must argue that each of the premises is true. Below, in each paragraph, I summarize the necessary steps that a project leader must follow to ensure that he or she has done everything right, and that all those associated with the project have done the same.

Premise 1: Do the mechanisms designed to protect against the harmful effects of delegating work comprehensively exclude those effects?

Delegation of work can only be done if it is ensured that the person performing the job is fully qualified to do so. Only then are managers and project leaders exonerated from blame. Of course, the work performed should still be checked, both in an automated fashion and comprehensively by testers.

That the person to whom work is delegated is fully qualified needs to be verified by those who would otherwise perform the job and who also delegate the work. If these are not the same person, there is a risk that business priorities would see work delegated on the basis of meeting deadlines and other performance goals that possibly contradict safety goals.

No one person can write the code of a project of such a scale within a reasonable timeline; that work is delegated is therefore a foregone conclusion. However, test cases for the code preferably need to be scrutinized by people who have intimate knowledge of the whole system and of the risks that could arise when code is worked on by different programmers.

This pattern of delegating work in large projects was addressed in the paper 'No Silver Bullet' (1987) by the computer scientist Fred Brooks, who concluded that the best method of obtaining near-foolproof software is to 'grow' it. By prototyping early, the software engineers have working code whose behavior can be judged as early as possible in the design cycle. This stands in contrast with the waterfall method, in which requirements are determined completely before actual implementation, so that the code the engineers work on is not fully operational until everything has been implemented. According to Brooks, this way of working needlessly complicates the code.

The answer to this premise rests largely on the best practices used within the field of software engineering; even with correct and comprehensive methods, the harmful effects of delegation are never fully excluded.

Premise 2: Does the design process ensure that all necessary technical expertise is consulted adequately?


A list of all possible influences that could cause a faulty software product to be delivered needs to be drawn up with the help of technical experts. For each such issue, a technical expert needs to be consulted throughout the entire software development process: design, formal test specification, coding and code testing. All involved should be allowed to voice concerns about further possible influences, and such concerns should not be ridiculed or dismissed. The latter can become a problem when management tries to reassert the importance of maintaining deadlines and profit targets.

To ensure that experts have been consulted adequately, their concerns need to be transformed into formal design specifications that can be used to test the final code. This method of drawing up test cases before coding ensures that test cases are not watered down to meet deadlines; the idealistic goals produced by designers and experts are thus maintained in the final product.

Premise 3: Does the knowledge gained from experts make it into the code?

This premise is directly tied to the possible problems identified in the case text. Issues such as incorrect variable information and misleading user-interface information are possible because knowledge from experts on explosives, industrial software systems and user interfaces was not properly integrated into the code. A recent software development practice such as 'pair programming' involves at least two programmers working on a piece of code, reducing code flaws because there is immediate peer review. This alone, however, does not solve the problem of expert knowledge not being implemented, or being implemented wrongly. When combined with test-first development, the work starts not with coding but with setting up the test cases that the code needs to fully comply with. Expert knowledge and critical issues are thus defined before the technical phase of coding begins. This first stage is also suited to building documentation and ensuring that all possible scenarios are covered by the final code.
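As a minimal sketch of this test-first idea, an expert rule can be written down as an executable test before any simulation code exists. The function name `firing_sequence_is_safe` and the 300 ms minimum delay between charges are invented for this illustration, not taken from the scenario:

```python
# Hypothetical sketch of "test cases before coding": an expert's safety
# rule is captured as an executable test that the real implementation
# must later satisfy. The rule and names here are assumptions.

def firing_sequence_is_safe(delays_ms):
    """Return True when successive charges fire at least 300 ms apart,
    the (assumed) rule supplied by the explosives expert."""
    return all(later - earlier >= 300
               for earlier, later in zip(delays_ms, delays_ms[1:]))

def test_minimum_delay_between_charges():
    assert firing_sequence_is_safe([0, 300, 650])      # compliant sequence
    assert not firing_sequence_is_safe([0, 100, 650])  # violates the rule

test_minimum_delay_between_charges()
```

Because the test exists first, the expert can review and sign off on it directly, and any later implementation that weakens the rule fails visibly.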

Premise 4: Is it excluded that expert knowledge is misunderstood or used in error?

The expert knowledge that is transformed into test cases for the software needs to be rigorously reviewed, by the experts themselves as well as by outsiders. These test cases can become public knowledge, as the final product merely needs to conform to them. Between technical experts and programmers, however, there can be misunderstanding due to differences in semantics. As much as possible, technical expertise needs to be transformed into formal statements that all can agree on.
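One way such a formal statement can be shared between expert and programmer is as an executable precondition rather than prose. The pressure bounds and the function name below are assumptions made for this sketch only:

```python
# Hypothetical sketch: an expert claim ("valid ambient pressure lies
# between 870 and 1085 hPa") written as an executable precondition, so
# the expert and the programmer agree on one exact formal statement.
# The bounds and names are illustrative assumptions.

MIN_PRESSURE_HPA = 870.0   # assumed lower bound from the expert
MAX_PRESSURE_HPA = 1085.0  # assumed upper bound from the expert

def validated_pressure(pressure_hpa):
    """Reject any input outside the range the experts vouched for."""
    if not (MIN_PRESSURE_HPA <= pressure_hpa <= MAX_PRESSURE_HPA):
        raise ValueError(
            f"pressure {pressure_hpa} hPa is outside the validated range")
    return pressure_hpa
```

A statement in this form is unambiguous in a way prose is not: either a given input satisfies it or it does not, and both parties can test it.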

Ultimately, the system is only as safe as the technical knowledge given by the experts; it can never be safer. It can also never be excluded that, even if knowledge experts directly implement parts of the system themselves, they will make a mistake, such as not anticipating dangerous circumstances arising from unexpected variable values. Such fringe cases cannot be excluded for any system in the world, though failure does lead to additional knowledge for the experts.

To conclude, it is impossible to judge whether, even if all these premises are true, the conclusion is also true. It is likewise indeterminable whether more premises need to be included to prove the conclusion. This problem was addressed by Ludwig Wittgenstein in his seminal philosophical work Tractatus Logico-Philosophicus (Wittgenstein, 1961), published in 1921. Later, in the 1930s, the mathematician Kurt Gödel proved with his incompleteness theorems (Gödel, 1992) that within a branch of mathematics there will always be some propositions that cannot be proven either true or false using the rules and axioms of that branch itself. With our logical argument we can only conclude that we do not know whether we have delivered an argument strong enough to prove the conclusion. A better question is whether we should take the risk of implementing such a software system knowing that there could be dangerous consequences. For some, the use of nuclear energy is utterly reprehensible even though the chance of a catastrophic accident with the latest reactor designs is statistically negligible. An accident can also contribute to additional knowledge, making such systems even safer, and unlike a nuclear meltdown, even with the largest demolition contracts the consequences will be highly localized.

Part 2: One Problem

A common consequence of a very mobile workforce is a high turnover of technically educated employees. One such consequence is a change of project leader. What if you are such a project leader and have decided to quit your present job, either to find new opportunities or to retire, but you are aware of a possible safety issue with the project you are about to leave? What responsibility do you keep with regard to safety after you have left? Should you make certain that your successor shares your worries and has the same expertise that you have? This chapter discusses the following three questions raised by this scenario: a) what is the cause of the problem, b) how could the problem have been avoided, and c) who is legally and/or ethically responsible for the negative consequences?

What is the cause of the problem?

A change of project leader brings with it more than a new person with different expert knowledge and project knowledge. The replacement project leader may have diminished influence among his co-workers, management and the client. When potentially dangerous issues arise, he may not be in a position to challenge those factions, for fear of losing his already precarious position. This attitude of not wanting to 'rock the boat' has led to innumerable industrial accidents, such as those with the Therac-25 radiation therapy machine (Leveson, 1995), and runs counter to the age-old dictum 'All boats rock'. A second consequence is that the new project leader may not have the technical skills to fully judge the dangers that may arise once the project has been completed. Is he capable of anticipating friction between co-workers, of recognizing when programmers have insufficient skill, and of judging technical feedback?

All these possible shortcomings should give present project leaders pause as to whether they might be ethically and legally responsible for any future mishap with a system they have helped develop.

How could the problem be avoided?

To ensure that no problems arise from a change of project leader, the current project leader has to ensure that their successor has the required technical knowledge of the field and of the project, and has a similar position of authority vis-à-vis management, the client and the team. Ensuring this is almost impossible, not least because trust and authority need to be built up over time, and by default a project leader's replacement has spent less time gaining such confidence. The current project leader should break in his replacement over a period of time so they can close the confidence gap as much as possible. Even then, the current project leader may need to remain abreast of the project's progress beyond his period of employment.

Who is legally and/or ethically responsible for the negative consequences?

The ex-project leader perhaps has only very limited legal responsibility. The company that produces the faulty product or software is ultimately responsible, as are the individual employees who worked on the code that was incorrectly implemented. However, in some jurisdictions blame can be assigned on the basis of responsibility. The 1985 EC Council Directive of what is now the European Union states that an act or omission by a third party, besides the injured party and the producer, can also create liability. A lack of scientific or technical knowledge may only exonerate the producer if he can prove that the state of knowledge at the time could not have discovered the existence of the defect. Such proof is almost impossible to come by for a software project whose greatest sources of danger are poor design choices and unsolved bugs.

Regardless, ethical responsibility remains, as the project leader is the person who had central knowledge of all aspects of the project. The only way to be absolved of such ethical liability is to ensure that the next project leader occupies a similar position within the project as you did regarding expert knowledge, influence with management and customers, and a similar rigor and attitude towards finding and solving problems. This attitude is at least mentioned in the ACM/IEEE Software Engineering Code of Ethics. However, as mentioned by Schiff (2013), it lacks teeth, in that a code of ethics has no legal bearing in disciplinary matters. The Code of Ethics can still be consulted as part of the ethical deliberation process should any software engineer come across such problems.

Part 3: Whistleblowing

Just before the deadline, you discover that the program gives very unrealistic results at particular values of atmospheric pressure, and you confront the boss with this. The boss states, rightly, that there is a very small chance of this problem occurring during a demolition (perhaps a 0.001% chance), and further says that it is impossible to delay the project to fix the problem. The boss also says that there is no reason to issue a patch later, since the likelihood of catastrophe is so small. How do you respond to this (e.g. whistleblowing)?

Introduction

The situation described above indicates that, despite everybody's best efforts, a flaw has nonetheless crept into the software during its development phase. This particular flaw may only cause a problem in 0.001% of demolitions, but such a number can seem deceptively low. A number of considerations have to be taken into account to determine whether it should be addressed.

Argument 1: even if there is only a 0.001% chance of an accident, that is still a 1 in 100,000 chance that something could go wrong, and the manager using this figure to justify doing nothing has probably considered the implication of this chance only for himself. The software product may instead become the industry standard and be used by many companies over a period of decades, so it is statistically possible that more than one accident could occur. In the United States alone, 27,000 people work in the demolition industry (IBISWorld, 2013). As the flaw produces anomalous results at particular values of atmospheric pressure, the elaborate safety procedures undertaken in the demolition industry are undermined.
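The arithmetic behind this argument can be made explicit: a per-demolition probability of 0.001% compounds over repeated use. The figure of 50,000 demolitions over the tool's lifetime is an assumption chosen purely for illustration:

```python
# Sketch of how a "negligible" per-use risk accumulates. Only the 0.001%
# figure comes from the scenario; the lifetime-usage count is assumed.

p_per_demolition = 0.00001   # 0.001% = 1 in 100,000
n_demolitions = 50_000       # assumed demolitions over the tool's lifetime

# Probability that the flaw strikes at least once across all uses.
p_at_least_one = 1 - (1 - p_per_demolition) ** n_demolitions
print(f"P(at least one incident) = {p_at_least_one:.1%}")  # about 39%
```

Under these assumptions, the "1 in 100,000" chance the manager dismisses becomes a roughly two-in-five chance of at least one incident over the product's lifetime.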

Argument 2: as a flaw has been identified, we are obligated to fix it. This flaw may hide others not previously found, and its existence indicates that the software product may be flawed in other respects as well. By fixing this flaw we learn more about the software; its continued existence demonstrates our lack of know-how.

Argument 3: we may be legally obligated not to release a software product if we know that it is flawed and can lead to dangerous situations. This legal obligation is based on ethical arguments as found in EEC directive 85/374/EEC.

Based on the above three arguments, a person who is aware of the problem should make sure that it is addressed. Given the bleak picture painted of the manager in charge, it can presumably be ruled out that the company producing the demolition software will rectify the flaw on its own. The only ethical course of action would be to reveal the information to the authorities and the wider world, to force the company to fix it. Such whistleblowing brings with it many legal and ethical burdens for the persons revealing the information, while it may fail to actually cause a change.

Whistleblowing

Whistleblowing is one possible way to try to rectify the problem that has been identified. However, it carries a lot of risk for the person who notifies the proper authorities. Davis (1996) identified what he called the standard theory.

Standard theory

This theory states that disloyalty to an organization is permissible when:

S1: the organization to which the would-be whistleblower belongs will, through its product or policy, do serious and considerable harm to the public;

S2: the would-be whistleblower has identified that threat of harm, reported it to her immediate superior, making clear both the threat itself and the objection to it, and concluded that the superior will do nothing effective;

S3: the would-be whistleblower has exhausted other internal procedures within the organization, or at least made use of as many internal procedures as the danger to others and her own safety make reasonable.

Whistleblowing is morally required when, in addition:

S4: the would-be whistleblower has evidence that would convince a reasonable, impartial observer that her view of the threat is correct;

S5: the would-be whistleblower has good reason to believe that revealing the threat will prevent the harm at reasonable cost.

Davis himself noted flaws in the standard theory: he believed that people who are closely connected with the source of the 'wrongdoing' have greater justification for whistleblowing than those who happen to obtain sensitive information through other means; he referred to this as the 'paradox of burden'. The standard theory also did not take into account that whistleblowing can be justified after the harmful events have already transpired, as was the case with the Challenger disaster; he refers to this as the 'paradox of harm'. Davis sought to amend the standard theory with the complicity theory, in which the whistleblower is complicit in the acts that will cause the wrongdoing; this removes the factors of 'burden' and 'harm' found in the standard theory and turns whistleblowing into a more demanding obligation.

Complicity theory

C1: what you reveal derives from your work for an organization;

C2: you are a voluntary member of that organization;

C3: you believe that the organization, though legitimate, is engaged in serious moral wrongdoing;

C4: you believe that your work for that organization will contribute (more or less directly) to the wrong if (but not only if) you do not publicly reveal what you know;

C5: you are justified in beliefs C3 and C4; and

C6: beliefs C3 and C4 are true.

Unlike the standard theory, the complicity theory offers actual justification for whistleblowing, but it offers little against the third paradox, the 'paradox of failure': most whistleblowers fail to bring about any change to the circumstances that led to the harmful consequences.

To summarize, a whistleblower has to take the following issues into account.

1. Is reporting the problem actually legal? This is probably the hardest part to understand, but revealing confidential information from an organization (a legal person) is not allowed in most jurisdictions, even if it exposes gross shortcomings. Some whistleblowers are not fazed by this, as they feel there is a sense of urgency, and will do without the legal protection afforded to whistleblowers.

2. Their personal circumstances can change radically for the worse: ostracism by co-workers, loss of income and loss of personal relationships are all possible outcomes. Despite these negative consequences of becoming a whistleblower, there is the possibility that if you had not alerted the proper authorities you could end up feeling remorse, which can also lead to problems in personal relations. If your failure to alert the authorities has a serious impact on the lives of others, through an accident or financial scandal, you could also become implicated and liable.

3. After a successful whistleblowing investigation, the problem in the software may be corrected, but the organizational resistance to fixing such dangerous errors will remain, and the whistleblower now has no possibility of identifying further shortcomings in the software. This has been referred to as the whistleblower paradox: whistleblowers usually fail in their original goal. Previous failures by whistleblowers should serve as a reminder that the decision to reveal confidential information has to be made based on the likely outcome. One author, Mathieu Bouville (2008), maintains that if the whistleblower failed in their outcome, then it was the wrong decision to make.

4. One of the immediate dangers a potential whistleblower faces is the power of an organization to silence an individual by collectively turning members against him or her. This is called mobbing, and it can be preceded or followed by a campaign of surveillance directed at the whistleblower, which in many cases may actually be legal: organizations may lawfully monitor the communication channels used by employees if they suspect something illegal is happening (Asscher & Steenbruggen, 2001). For an employee, trying to leak confidential information is cause for dismissal, or at the very least a leave of absence from work. Some of the protection a whistleblower can enjoy, such as a source of income, is thereby lost. It can be confidently concluded that whistleblowing should only be regarded as the last possible step, unless there is an immediate urgency.

Final remarks on whistleblowing

How do you make sure that, if you are going to blow the whistle, it will actually work? How is whistleblowing done in real life? Is it possible to gauge the pros and cons beforehand and make a decision based on one of the ethical theories described above? The short answer is no, while the long answer has to take into account that whistleblowing has seen a steady rise in recent years, though it should not be confused with simply leaking confidential information.

There are two ways of thinking about whistleblowing. The first asks whether it is actually legal under the circumstances, and whether the person who leaks is doing the right thing and not committing self-harm. The second considers only the value of the information that is leaked and its usefulness (utility) for society as a whole. The second argument is based not on ethical theory or moral philosophy but on common morality. A person who merely receives information about harm or wrongdoing could not legally be considered a whistleblower but would act more like a spy, even if the information is good for society. To ensure that a problem in a software system is successfully solved, it may be more desirable for the project leader or another insider to convince those in authority of his point of view, even in such a forceful manner that they may be inclined to have such a person dismissed.

Conclusion

The reader might conclude from the three preceding chapters that there is never a final argument to be made. A discussion of ethical issues may swing decisively in one direction but is in essence never over. From that point of view, there is no action a person can undertake that can subsequently be completely absolved through ethical argument; there is always a risk of negative consequences from action or inaction. There is ultimately little choice but to take at least some risk, as risk can never be completely excluded. To claim otherwise is impossible to prove, at least according to mathematical philosophy, in that what we consider truth is in fact based on axioms that may be elementary but are still as artificial as Euclidean geometry. Ultimately, a decision to deploy new technological solutions such as software systems may need to be based solely on their perceived utility, while consideration is given to all possible stakeholders.


References

[1] ACM/IEEE Software Engineering Code of Ethics.

[2] Asscher, L. F., & Steenbruggen, W. (2001). Het e-mailgeheim op de werkplek: over de toelaatbaarheid van inbreuken op het communicatiegeheim van de werknemer in het digitale tijdperk. Nederlands Juristenblad, 76(37), 1787-1794.

[3] Bouville, M. (2008). Whistle-blowing and morality. Journal of Business Ethics, 81(3), 579-585.

[4] Brooks, F. P., Jr. (1987). No silver bullet: essence and accidents of software engineering. IEEE Computer, 20(4), 10-19.

[5] Davis, M. (1996). Some paradoxes of whistleblowing. Business & Professional Ethics Journal, 3-19.

[6] Council Directive 85/374/EEC (1985). Official Journal L 210, 29-33.

[7] Gödel, K. (1992). On Formally Undecidable Propositions of Principia Mathematica and Related Systems. Dover Publications.

[8] IBISWorld. Retrieved from http://www.ibisworld.com/industry/default.aspx?indid=207 on December 21, 2013.

[9] Leveson, N. (1995). Medical devices: the Therac-25. Appendix of: Safeware: System Safety and Computers.

[10] Schiff, J. (2013). Professional Ethics: Should Software Engineers Adhere to a Professional Code of Conduct? Slides for course CS301.

[11] Tavani, H. T. (2004). Ethics and Technology: Ethical Issues in an Age of Information and Communication Technology. New York: Wiley.

[12] Wittgenstein, L. (1961). Tractatus Logico-Philosophicus, trans. D. F. Pears and B. F. McGuinness.