Similar Documents
20 similar documents found.
1.
Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. When the black box is opened up and we see how autonomy is understood and ‘made’ by those involved in the design and development of robots, the responsibility questions change significantly.

2.
杜严勇 《科学学研究》2017,35(11):1608-1613
Who should bear moral responsibility, and how that responsibility should be distributed, is an important question in robot ethics. Even though robots possess ever higher degrees of autonomy and ever stronger learning capabilities, this does not mean that robots can bear moral responsibility on their own; rather, moral responsibility should be borne by the people and organizations involved in robot technology. Only by clarifying the moral responsibilities of robot designers, manufacturers, users, government agencies, and other organizations, and by establishing concrete mechanisms for assuming responsibility, can the phenomenon of "organized irresponsibility" be effectively avoided. Moreover, judging from current legal scholarship on liability for driverless technology, most scholars tend to hold that manufacturers, sellers, and users should bear responsibility, and that driverless cars themselves do not constitute responsible agents at all.

3.
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in future decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations because the emergent properties of artifacts must be tested in real world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.

4.
The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how to verify and monitor the capabilities of rapidly changing technologies. In this article we describe concepts from our previous work about autonomy and ethics for robots and apply them to military robots and robot arms control. We conclude with a proposal for a first step toward limiting the deployment of autonomous weapons capable of initiating lethal force.

5.
In the last decade we have entered the era of remote-controlled military technology. The excitement about this new technology should not mask the ethical questions that it raises. A fundamental ethical question is who may be held responsible for civilian deaths. In this paper we will discuss the role of the human operator or so-called ‘cubicle warrior’, who remotely controls the military robots behind visual interfaces. We will argue that the socio-technical system conditions the cubicle warrior to dehumanize the enemy. As a result the cubicle warrior is morally disengaged from his destructive and lethal actions. This challenges what he should know to make responsible decisions (the so-called knowledge condition). Now and in the near future, three factors will influence and may further increase this moral disengagement by diminishing locus-of-control orientation: (1) photoshopping the war; (2) the moralization of technology; (3) the speed of decision-making. As a result, cubicle warriors can no longer reasonably be held responsible for the decisions they make.

6.
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that, when viewed at LoA2, an unmodifiable table distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
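The table distinction driving this argument can be made concrete. The following Python sketch is not from the paper; the class name TableAgent, the modifiable flag, and the toy learn rule are hypothetical illustrations of the difference between an agent whose state-to-action table is fixed by its designer (the LoA2 view) and one permitted to rewrite its own table.

```python
from typing import Dict

State = str
Action = str


class TableAgent:
    """Agent whose behaviour is determined by a state-to-action table (the LoA2, designer view)."""

    def __init__(self, table: Dict[State, Action], modifiable: bool = False) -> None:
        self.table = dict(table)      # mapping written by the designer
        self.modifiable = modifiable  # False: unmodifiable table; True: fully modifiable table

    def act(self, state: State) -> Action:
        # At LoA1 (the user view) only this input/output behaviour is visible.
        return self.table.get(state, "do_nothing")

    def learn(self, state: State, new_action: Action) -> None:
        # A fully modifiable agent may overwrite entries in its own table;
        # an unmodifiable agent cannot, so every behaviour traces back to the designer.
        if not self.modifiable:
            raise PermissionError("table is fixed by the designer")
        self.table[state] = new_action


if __name__ == "__main__":
    designed = {"obstacle_ahead": "stop", "path_clear": "advance"}

    fixed_agent = TableAgent(designed, modifiable=False)
    adaptive_agent = TableAgent(designed, modifiable=True)

    adaptive_agent.learn("obstacle_ahead", "swerve")  # self-modification succeeds
    print(fixed_agent.act("obstacle_ahead"))          # -> stop (the designer's entry)
    print(adaptive_agent.act("obstacle_ahead"))       # -> swerve (the agent's own entry)
```

Even in the fully modifiable case the designer wrote the learn rule and chose to enable it, which is the intuition behind the paper's claim that the designer retains strong moral responsibility.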

7.
A core question in the ethical dilemma currently facing the development of autonomous vehicles is whether moral norms should be embedded in the algorithmic structure and, if so, in what way. When confronting possible future traffic accidents, both random choice that screens out information and relies on "moral luck" and autonomous decision-making by an AI system based on complete information face serious difficulties; autonomous vehicles should therefore be equipped with a preset "moral algorithm". As for how such a "moral algorithm" should be determined, given the conflicts among existing moral principles, the complexity of moral decision-making, and the situated character of human moral judgment, basing it on some established human moral principle or norm is unrealistic.

8.

Does cruel behavior towards robots lead to vice, whereas kind behavior does not lead to virtue? This paper presents a critical response to Sparrow’s argument that there is an asymmetry in the way we (should) think about virtue and robots. It discusses how much we should praise virtue as opposed to vice, how virtue relates to practical knowledge and wisdom, how much illusion is needed for it to be a barrier to virtue, the relation between virtue and consequences, the moral relevance of the reality requirement and the different ways one can deal with it, the risk of anthropocentric bias in this discussion, and the underlying epistemological assumptions and political questions. This response is not only relevant to Sparrow’s argument or to robot ethics but also touches upon central issues in virtue ethics.

9.
The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken into account, prior to the implementation and development stages. Here I examine whether the creation of virtuous autonomous machines is morally permitted by the central tenets of virtue ethics. It is argued that the creation of such machines violates certain tenets of virtue ethics, and hence that the creation and use of those machines is impermissible. One upshot of this is that, although virtue ethics may have a role to play in certain near-term Machine Ethics projects (e.g. designing systems that are sensitive to ethical considerations), machine ethicists need to look elsewhere for a moral framework to implement into their autonomous artificial moral agents, Wallach and Allen’s claims notwithstanding.

10.
Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are (1) Is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory? and (2) does the creation and use of such machines improve the moral caliber of modern warfare? However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether autonomous robots ought to be programmed to be pacifists. The answer arrived at is “Yes”: if we decide to create autonomous robots, they ought to be pacifists. This is to say that robots ought not to be programmed to willingly and intentionally kill human beings, or, by extension, participate in or promote warfare, as something that predictably involves the killing of humans. Insofar as we are the ones that will be determining the content of the robot’s value system, then we ought to program robots to be pacifists, rather than ‘warists’. This is (in part) because we ought to be pacifists, and creating and programming machines to be “autonomous lethal robotic systems” directly violates this normative demand on us. There are no mitigating reasons to program lethal autonomous machines to contribute to or participate in warfare. Even if the use of autonomous lethal robotic systems could be consistent with received just war theory and the international laws of war, and even if their involvement could make warfare less inhumane in certain ways, these reasons do not compensate for the ubiquitous harms characteristic of modern warfare. In this paper, I provide four main reasons why autonomous robots ought to be pacifists, most of which do not depend on the truth of pacifism. The strong claim being argued for here is that automated warfare ought not to be pursued. The weaker claim being argued for here is that automated warfare ought not to be pursued, unless it is the most pacifist option available at the time, and other alternatives have been reasonably explored, and we are simultaneously promoting a (long term) pacifist agenda in (many) other ways. Thus, the more ambitious goal of this paper is to convince readers that automated warfare is something that we ought not to promote or pursue, while the more modest—and I suspect, more palatable—goal is to spark sustained critical discussion about the assumptions underlying the drive towards automated warfare, and to generate legitimate consideration of its pacifist alternatives in theory, policy, and practice.

11.
谢江佩  戴馨  黎常 《科研管理》2020,41(7):201-209
Drawing on the approach-inhibition theory of power, this study analyzes the psychological and cognitive mechanism linking perceived power to voice behavior. Based on a survey of 311 members of 69 teams collected at three time points, it examines how team members' perceived power affects their voice behavior. The results show that: (1) perceived power is positively related to felt responsibility for constructive change; (2) felt responsibility for constructive change mediates the effect of perceived power on voice behavior; (3) the legitimacy of team power moderates the relationship between perceived power and felt responsibility for constructive change; and further, (4) team power legitimacy moderates the mediating role of felt responsibility for constructive change in the relationship between perceived power and voice behavior, forming a moderated mediation. Finally, the theoretical and practical implications are discussed.

12.
This paper argues against the moral Turing test (MTT) as a framework for evaluating the moral performance of autonomous systems. Though the term has been carefully introduced, considered, and cautioned about in previous discussions (Allen et al. in J Exp Theor Artif Intell 12(3):251–261, 2000; Allen and Wallach 2009), it has lingered on as a touchstone for developing computational approaches to moral reasoning (Gerdes and Øhrstrøm in J Inf Commun Ethics Soc 13(2):98–109, 2015). While these efforts have not led to the detailed development of an MTT, they nonetheless retain the idea as a way to discuss what kinds of action and reasoning should be demanded of autonomous systems. We explore the flawed basis of an MTT in imitation, even one based on scenarios of morally accountable actions. MTT-based evaluations are vulnerable to deception, inadequate reasoning, and inferior moral performance vis-à-vis a system’s capabilities. We propose that verification—which demands the design of transparent, accountable processes of reasoning that reliably prefigure the performance of autonomous systems—serves as a superior framework for both designer and system alike. As autonomous social robots in particular take on an increasing range of critical roles within society, we conclude that verification offers an essential, albeit challenging, moral measure of their design and performance.

13.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria of a robot’s moral competence, I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolve at least some of them.

14.
Floridi and Sanders’ seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.

15.
Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.

16.
张冬玲 《科教文汇》2012,(11):97-98
With the arrival of the new curriculum philosophy, teachers should transform their own teaching concepts while guiding students to change their ways of learning, actively advocating and encouraging autonomous, inquiry-based, and cooperative learning. As the new curriculum reform has gradually deepened and spread, the teaching of ideological and moral education in junior middle schools has met with considerable doubt and with practical problems. Drawing on practical experience, this paper focuses on these doubts about junior middle school ideological and moral education and proposes countermeasures to the existing problems, in the hope of offering suggestions and a reference for the smooth implementation of curriculum reform in this subject.

17.
I argue that the problem of ‘moral luck’ is an unjustly neglected topic within Computer Ethics. This is unfortunate given that the very nature of computer technology, its ‘logical malleability’, leads to ever greater levels of complexity, unreliability and uncertainty. The ever widening contexts of application in turn lead to greater scope for the operation of chance and the phenomenon of moral luck. Moral luck bears down most heavily on notions of professional responsibility, the identification and attribution of responsibility. It is immunity from luck that conventionally marks out moral value from other kinds of values such as instrumental, technical, and use value. The paper describes the nature of moral luck and its erosion of the scope of responsibility and agency. Moral luck poses a challenge to the kinds of theoretical approaches often deployed in Computer Ethics when analyzing moral questions arising from the design and implementation of information and communication technologies. The paper considers the impact on consequentialism; virtue ethics; and duty ethics. In addressing cases of moral luck within Computer Ethics, I argue that it is important to recognise the ways in which different types of moral systems are vulnerable, or resistant, to moral luck. Different resolutions are possible depending on the moral framework adopted. Equally, resolution of cases will depend on fundamental moral assumptions. The problem of moral luck in Computer Ethics should prompt us to new ways of looking at risk, accountability and responsibility.

18.
The construction and improvement of the ecological environment bears on national ecological security and the building of an ecological civilization. The realization of ecological civilization proceeds through four stages: germination, preparation, construction, and an advanced stage; institutionalized ecological responsibilities and obligations are a necessary step and an important path in its development. It is therefore imperative to establish a system of ecological obligations, and ecological obligation should become a basic civic duty prescribed by the constitution. From the perspective of realizing an ecologically civilized society, this paper defines the legal connotation and characteristics of the ecological obligation system. As a new form of obligation, the ecological obligation system can be effectively linked with other obligations prescribed by the constitution. The paper argues that its realization hinges on two points: first, clarifying, in terms of legal relationships, the objects and subjects of responsibility for the ecological environment; second, designing an institutional framework and operating mechanism for fulfilling ecological obligations. Implementing the system requires mobilizing the responsibility of the whole population, including the government. As the organization of public power, the government bears three political responsibilities: ecological planning, management, and procurement; individuals and organizations of various forms, on the basis of statutory social responsibility or basic ethics, must also fulfill and bear their respective ecological obligations according to law.

19.
This article explores recent developments in the regulation of Internet speech, in particular, injurious or defamatory speech, and the impact the attempts at regulation are having on the ‘body’ in the sense of the individual person who speaks through the medium of the Internet and upon those harmed by that speech. The article proceeds in three sections. First, a brief history of the legal attempts to regulate defamatory Internet speech in the United States is presented; a short comparative discussion of defamation law in the UK and Australia is included. As discussed below, this regulation has altered the traditional legal paradigm of responsibility and, as a result, creates potential problems for the future of unrestricted and even anonymous speech on the Internet. Second, an ethical assessment is made of the defamatory speech environment in order to determine which actors have moral responsibility for the harm caused by defamatory speech. This moral assessment is compared to the developing and anticipated legal paradigm to identify possible conformity of moral and legal tenets or to recognize the conflict between morality and law in assigning responsibility to defamatory actors. This assessment then concludes with possible suggestions for changes in the legal climate governing the regulation of defamatory speech on the Internet, as well as a prediction of the result should the legal climate continue to develop on its present course. This is not to suggest that all law, or even the law of defamation, be structured to reflect the subjectivity of a moral construct, but since it is the author's position that the legal assignment of liability in online settings is misaligned, this reflection can serve as a beginning reassessment of that assignment.

20.
Robots currently deployed in application domains lack consciousness, mental states, and feelings, the affective conditions commonly associated with morality; they merely follow rules according to programs set by humans. Whether a robot can count as an artificial moral agent (AMA) seems to hinge on whether it possesses such affective factors, since morality and emotion are closely connected. Behaviorist and performance-based views, however, hold that even machines lacking emotions should receive moral care. Judging from the practical applications of robots, whether they play the role of agents with cognitive deficits, of servants, or of property, they all have a corresponding moral status and should receive ethical care in different ways. With the development of artificial intelligence, we believe that in the future we will be able to build AMA robots that do possess emotions.
