Similar Documents
20 similar documents found (search time: 46 ms)
1.
This paper argues in favor of more widespread and systematic applications of a virtue-based normative framework to questions about the ethical impact of information technologies, and social networking technologies in particular. The first stage of the argument identifies several distinctive features of virtue ethics that make it uniquely suited to the domain of IT ethics, while remaining complementary to other normative approaches. I also note its potential to reconcile a number of significant methodological conflicts and debates in the existing literature, including tensions between phenomenological and constructivist perspectives. Finally, I claim that a virtue-based perspective is needed to correct for a strong utilitarian bias in the research methodologies of existing empirical studies on the social and ethical impact of IT. The second part of the paper offers an abbreviated demonstration of the merits of virtue ethics by showing how it might usefully illuminate the moral dimension of emerging social networking technologies. I focus here on the potential impact of such technologies on three virtues typically honed in communicative practices: patience, honesty and empathy.

2.
Beginning with the initial premise that the Internet has a global character, the paper will argue that the normative evaluation of digital information on the Internet necessitates an evaluative model that is itself universal and global in character (I agree, therefore, with Gorniak-Kocikowska’s claim that because of its global nature “computer ethics has to be regarded as global ethics” (Gorniak-Kocikowska, Science and Engineering Ethics, 1996)). The paper will show that information has a dual normative structure that commits all disseminators of information to both epistemological and ethical norms that are in principle universal and thus global in application. Based on this dual normative characterization of information, the paper will seek to demonstrate: (1) that information, and Internet information (interformation) specifically, as a process and product of communication, has an inherent normative structure that commits its producers, disseminators, communicators and users (everyone, in fact, who deals with information) to certain mandatory epistemological and ethical commitments; and (2) that the negligent or purposeful abuse of information in violation of the epistemological and ethical commitments to which its inherent normative structure gives rise is also a violation of the universal rights to freedom and wellbeing to which all agents are entitled by virtue of being agents, and in particular informational agents.

3.
A core question in the ethical dilemmas facing the development of autonomous vehicles is whether moral norms should be embedded in algorithmic structures, and if so, in what way. When facing possible future traffic accidents, both shielding information and relying on "moral luck" to make a random choice, and autonomous decision-making by an AI system acting on complete information, run into serious difficulties; autonomous vehicles should therefore be preset with a "moral algorithm". As for how this "moral algorithm" should be determined, given the conflicts among existing moral principles, the complexity of moral decision-making, and the situated character of human moral judgment, basing it on some fixed human moral principle or moral norm is unrealistic.

4.
This paper argues against the moral Turing test (MTT) as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, considered, and cautioned about in previous discussions (Allen et al. in J Exp Theor Artif Intell 12(3):251–261, 2000; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning (Gerdes and Øhrstrøm in J Inf Commun Ethics Soc 13(2):98–109, 2015). While these efforts have not led to the detailed development of an MTT, they nonetheless retain the idea as a way of discussing what kinds of action and reasoning should be demanded of autonomous systems. We explore the flawed basis of an MTT in imitation, even one based on scenarios of morally accountable actions. MTT-based evaluations are vulnerable to deception, inadequate reasoning, and inferior moral performance vis-à-vis a system’s capabilities. We propose that verification, which demands the design of transparent, accountable processes of reasoning that reliably prefigure the performance of autonomous systems, serves as a superior framework for designer and system alike. As autonomous social robots in particular take on an increasing range of critical roles within society, we conclude that verification offers an essential, albeit challenging, moral measure of their design and performance.

5.
I argue that the problem of ‘moral luck’ is an unjustly neglected topic within Computer Ethics. This is unfortunate given that the very nature of computer technology, its ‘logical malleability’, leads to ever greater levels of complexity, unreliability and uncertainty. The ever widening contexts of application in turn leave greater scope for the operation of chance and the phenomenon of moral luck. Moral luck bears down most heavily on notions of professional responsibility: the identification and attribution of responsibility. It is immunity from luck that conventionally marks out moral value from other kinds of value, such as instrumental, technical, and use value. The paper describes the nature of moral luck and its erosion of the scope of responsibility and agency. Moral luck poses a challenge to the kinds of theoretical approaches often deployed in Computer Ethics when analyzing moral questions arising from the design and implementation of information and communication technologies. The paper considers the impact on consequentialism, virtue ethics, and duty ethics. In addressing cases of moral luck within Computer Ethics, I argue that it is important to recognise the ways in which different types of moral systems are vulnerable, or resistant, to moral luck. Different resolutions are possible depending on the moral framework adopted. Equally, the resolution of cases will depend on fundamental moral assumptions. The problem of moral luck in Computer Ethics should prompt us to find new ways of looking at risk, accountability and responsibility.

6.
黎常, 金杨华. 《科研管理》 (Science Research Management), 2021, 42(8): 9-16
While profoundly reshaping how human society produces and lives, artificial intelligence also raises many ethical dilemmas and challenges; establishing new norms of science and technology ethics, so that AI better serves humanity, has become a shared concern across society. From the perspective of science and technology ethics, this paper reviews domestic and international research on the issues arising in AI fields such as robotics, algorithms, big data, and autonomous driving, including moral agency, responsibility allocation, technical safety, discrimination and fairness, and privacy and data protection, as well as the ethical governance of AI technology. It then identifies directions for future research: establishing ethical principles and a governance system suited to the Chinese context, interdisciplinary collaboration in AI ethics research, integrating theoretical analysis with practical cases, and the division of labor and collaboration among the ethical roles of multiple actors.

7.
李伟, 华梦莲. 《科学学研究》 (Studies in Science of Science), 2020, 38(4): 588-594
The core question for autonomous vehicles is what moral choice such a vehicle should make when a crash is unavoidable: should it save the driver, the passengers, or pedestrians? Scholars have proposed answers from the perspective of the vehicle itself, grounded in ethical principles such as utilitarianism, egoism, and Kantian deontology, but each of these faces significant problems. Starting instead from the driver, this paper proposes a method of self-selection of moral principles to resolve the dilemma, combining ethical intuitionism with self-preset moral choice. The proposed method shifts the way of thinking from the perspective of intelligent machines back to human beings themselves, offering an alternative angle on the problem.

8.
This paper pertains to research aiming at linking ethics and automated reasoning in autonomous machines. It focuses on a formal approach intended to be the basis of an artificial agent’s reasoning that a human observer could regard as ethical reasoning. The approach includes formal tools to describe a situation, together with models of ethical principles designed to automatically compute a judgement on the possible decisions in a given situation and to explain why a given decision is or is not ethically acceptable. It is illustrated with three ethical frameworks, utilitarian ethics, deontological ethics and the Doctrine of Double Effect, whose formal models are tested on ethical dilemmas so as to examine how they respond to those dilemmas and to highlight the issues at stake when a formal approach to ethical concepts is considered. The whole approach is instantiated on the drone dilemma, a thought experiment we have designed; this allows the discrepancies between the judgements of the various ethical frameworks to be shown. The final discussion highlights the different sources of subjectivity in the approach, despite the fact that concepts are expressed more rigorously than in natural language: indeed, the formal approach enables subjectivity to be identified and located more precisely.

9.
This article discusses mechanisms and principles for the assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept and use it to identify the type of robots that call for moral consideration. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be prepared for a future in which people blame robots for their actions. It is important to investigate, already today, the mechanisms that control human behavior in this respect. The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of robots’ behavior should be seen as one of many performance measures by which we evaluate robots. How to design ethics-based control systems should be investigated carefully now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.

10.
Information ethics: On the philosophical foundation of computer ethics
The essential difficulty about the philosophical status of Computer Ethics (CE) is a methodological problem: standard ethical theories cannot easily be adapted to deal with CE problems, which appear to strain their conceptual resources, and CE requires a conceptual foundation as an ethical theory. Information Ethics (IE), the philosophical foundational counterpart of CE, can be seen as a particular case of “environmental” ethics or ethics of the infosphere. What is good for an information entity and the infosphere in general? This is the ethical question asked by IE. The answer is provided by a minimalist theory of deserts: IE argues that there is something more elementary and fundamental than life and pain, namely being, understood as information, and entropy, and that any information entity is to be recognised as the centre of a minimal moral claim, which deserves recognition and should help to regulate the implementation of any information process involving it. IE can provide a valuable perspective from which to approach, with insight and adequate discernment, not only moral problems in CE but also the whole range of conceptual and moral phenomena that form the ethical discourse.

11.
Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are: (1) Is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory? And (2) does the creation and use of such machines improve the moral caliber of modern warfare? However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether autonomous robots ought to be programmed to be pacifists. The answer arrived at is “Yes”: if we decide to create autonomous robots, they ought to be pacifists. This is to say that robots ought not to be programmed to willingly and intentionally kill human beings or, by extension, to participate in or promote warfare, as something that predictably involves the killing of humans. Insofar as we are the ones who will be determining the content of the robot’s value system, we ought to program robots to be pacifists, rather than ‘warists’. This is (in part) because we ought to be pacifists, and creating and programming machines to be “autonomous lethal robotic systems” directly violates this normative demand on us. There are no mitigating reasons to program lethal autonomous machines to contribute to or participate in warfare. Even if the use of autonomous lethal robotic systems could be consistent with received just war theory and the international laws of war, and even if their involvement could make warfare less inhumane in certain ways, these reasons do not compensate for the ubiquitous harms characteristic of modern warfare. In this paper, I provide four main reasons why autonomous robots ought to be pacifists, most of which do not depend on the truth of pacifism. The strong claim argued for here is that automated warfare ought not to be pursued. The weaker claim is that automated warfare ought not to be pursued unless it is the most pacifist option available at the time, other alternatives have been reasonably explored, and we are simultaneously promoting a (long-term) pacifist agenda in (many) other ways. Thus, the more ambitious goal of this paper is to convince readers that automated warfare is something that we ought not to promote or pursue, while the more modest, and, I suspect, more palatable, goal is to spark sustained critical discussion about the assumptions underlying the drive towards automated warfare, and to generate legitimate consideration of its pacifist alternatives in theory, policy, and practice.

12.
Ethics and Information Technology - In this article, a concise argument for computational rationality as a basis for artificial moral agency is advanced. Some ethicists have long argued that...

13.
Ethical principles and governance systems for artificial intelligence: current status and strategic recommendations
This paper first defines the basic concepts of AI ethical principles and analyzes the current state of AI development. It then examines the main causes of AI ethical problems and summarizes the ethical issues arising in typical application scenarios, including autonomous driving, intelligent media, smart healthcare, and service robots. It further explores a governance framework built around basic principles for addressing AI ethical problems, covering technical responses, moral norms, policy guidance, and legal rules. Finally, in light of China's strategic plan for AI development, it argues that implementation within social governance should adopt a multi-level, multi-dimensional governance system, and offers concrete recommendations on AI ethical principles and governance for 2020-2035, covering public outreach, standards systems, and laws and regulations.

14.
Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, in the making of moral decisions. However, assembling a system from the bottom up that is capable of accommodating moral considerations draws attention to the importance of a much wider array of mechanisms in honing moral intelligence. Moral machines need not emulate human cognitive faculties in order to function satisfactorily in responding to morally significant situations. But working through methods for building AMAs will have a profound effect in deepening an appreciation for the many mechanisms that contribute to moral acumen, and the manner in which these mechanisms work together. Building AMAs highlights the need for a comprehensive model of how humans arrive at satisfactory moral judgments.

15.
On the intrinsic value of information objects and the infosphere
What is the most general common set of attributes that characterises something as intrinsically valuable, and hence as subject to some moral respect, and without which something would rightly be considered intrinsically worthless or even positively unworthy and therefore rightly to be disrespected in itself? This paper develops and supports the thesis that the minimal condition of possibility of an entity's least intrinsic value is to be identified with its ontological status as an information object. All entities, even when interpreted as only clusters of information, still have a minimal moral worth qua information objects and so may deserve to be respected. The paper is organised into four main sections. Section 1 models moral action as an information system using the object-oriented programming methodology (OOP). Section 2 addresses the question of what role the several components constituting the moral system can have in an ethical analysis. If they can play only an instrumental role, then Computer Ethics (CE) is probably bound to remain at most a practical, field-dependent, applied or professional ethics. However, Computer Ethics can give rise to a macroethical approach, namely Information Ethics (IE), if one can show that ethical concern should be extended to include not only human, animal or biological entities, but also information objects. The following two sections show how this minimalist level of analysis can be achieved. Section 3 provides an axiological analysis of information objects. It criticises the Kantian approach to the concept of intrinsic value and shows that it can be improved by using the methodology introduced in the first section. The solution of the Kantian problem prompts the reformulation of the key question concerning the moral worth of an entity: what is the intrinsic value of x qua an object constituted by its inherited attributes? In answering this question, it is argued that entities can share different observable properties depending on the level of abstraction adopted, and that it is still possible to speak of moral value even at the highest level of ontological abstraction represented by the informational analysis. Section 4 develops a minimalist axiology based on the concept of information object. It further supports IE's position by addressing five objections that may undermine its acceptability.

16.
The paper investigates the ethics of information transparency (henceforth transparency). It argues that transparency is not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles. A new definition of transparency is offered in order to take into account the dynamics of information production and the differences between data and information. It is then argued that the proposed definition provides a better understanding of what sort of information should be disclosed and what sort of information should be used in order to implement and make effective the ethical practices and principles to which an organisation is committed. The concepts of “heterogeneous organisation” and “autonomous computational artefact” are further defined in order to clarify the ethical implications of the technology used in implementing information transparency. It is argued that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that could be disclosed by organisations in order to support their ethical standing.

17.
It has been argued that moral problems in relation to Information Technology (IT) require new theories of ethics. In recent years, an interesting new theory to address such concerns has been proposed, namely the theory of Information Ethics (IE). Despite the promise of IE, the theory has not enjoyed much public discussion. The aim of this paper is to initiate such discussion by critically evaluating the theory of IE.

18.
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in future decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirement Engineering through discussion and analysis of general requirements for the design of ethical robots.

19.
Artificial Life (ALife) has two goals. The first attempts to describe fundamental qualities of living systems through agent-based computer models; the second studies whether or not we can artificially create living things in computational mediums that can be realized either virtually in software or through biotechnology. The study of ALife has recently branched into two further subdivisions: one is “dry” ALife, the study of living systems “in silico” through the use of computer simulations, and the other is “wet” ALife, which uses biological material to realize what has only been simulated on computers; in effect, wet ALife uses biological material as a kind of computer. This is challenging to the field of computer ethics, as it points towards a future in which computer ethics and bioethics might have shared concerns. The emerging studies into wet ALife are likely to provide strong empirical evidence for ALife’s most challenging hypothesis: that life is a certain set of computable functions that can be duplicated in any medium. I believe this will propel ALife into the midst of the mother of all cultural battles that has been gathering around the emergence of biotechnology. Philosophers need to pay close attention to this debate and can serve a vital role in clarifying and resolving the dispute. But even if ALife is merely a computer modeling technique that sheds light on living systems, it still has a number of significant ethical implications, such as its use in the modeling of moral and ethical systems, as well as in the creation of artificial moral agents.

20.
With the development of AI technology, autonomous intelligent robots are beginning to enter everyday life. The rise of "robot ethics" abroad is an ethical reflection on this development. The term "robot" in "robot ethics" has a specific meaning, and its domains of application span labor and services, military security, education and research, entertainment, healthcare, the environment, personal care, and emotional companionship. Within these, safety issues, legal and ethical issues, and social issues constitute the three major problem areas of "robot ethics" research.
