Similar Documents
20 similar documents found.
1.
This article discusses mechanisms and principles for assigning moral responsibility to intelligent robots, with special focus on military robots. We introduce autonomous power as a new concept and use it to identify the types of robots that call for moral consideration. It is further argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development leads to robots with increasing autonomous power, we should be prepared for a future in which people blame robots for their actions. It is important to investigate, already today, the mechanisms that govern human behavior in this respect. The results may be used when designing future military robots, to counter unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of robots' behavior should be seen as one of many performance measures by which we evaluate robots. How to design ethics-based control systems should be carefully investigated now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.

2.
The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how to verify and monitor the capabilities of rapidly changing technologies. In this article we describe concepts from our previous work about autonomy and ethics for robots and apply them to military robots and robot arms control. We conclude with a proposal for a first step toward limiting the deployment of autonomous weapons capable of initiating lethal force.

3.
Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are (1) Is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory?; and (2) does the creation and use of such machines improve the moral caliber of modern warfare? However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether autonomous robots ought to be programmed to be pacifists. The answer arrived at is "Yes": if we decide to create autonomous robots, they ought to be pacifists. This is to say that robots ought not to be programmed to willingly and intentionally kill human beings, or, by extension, participate in or promote warfare, as something that predictably involves the killing of humans. Insofar as we are the ones that will be determining the content of the robot's value system, we ought to program robots to be pacifists, rather than 'warists'. This is (in part) because we ought to be pacifists, and creating and programming machines to be "autonomous lethal robotic systems" directly violates this normative demand on us. There are no mitigating reasons to program lethal autonomous machines to contribute to or participate in warfare. Even if the use of autonomous lethal robotic systems could be consistent with received just war theory and the international laws of war, and even if their involvement could make warfare less inhumane in certain ways, these reasons do not compensate for the ubiquitous harms characteristic of modern warfare. In this paper, I provide four main reasons why autonomous robots ought to be pacifists, most of which do not depend on the truth of pacifism. The strong claim being argued for here is that automated warfare ought not to be pursued.
The weaker claim being argued for here is that automated warfare ought not to be pursued unless it is the most pacifist option available at the time, other alternatives have been reasonably explored, and we are simultaneously promoting a (long-term) pacifist agenda in (many) other ways. Thus, the more ambitious goal of this paper is to convince readers that automated warfare is something that we ought not to promote or pursue, while the more modest, and I suspect more palatable, goal is to spark sustained critical discussion about the assumptions underlying the drive towards automated warfare, and to generate legitimate consideration of its pacifist alternatives, in theory, policy, and practice.

4.
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems like cognitive robots are being developed that are expected to become part of our everyday lives in future decades. Thus, it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacities for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions and artificial (synthetic) emotions, come in varying degrees and depend on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system.
This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level Requirements Engineering through discussion and analysis of general requirements for the design of ethical robots.

5.
In their important paper "Autonomous Agents", Floridi and Sanders use "levels of abstraction" to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, "Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?" To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an agent with an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.

6.
In recent years, underpinned by constructivist theory, the concept of autonomous learning has continued to develop at home and abroad, and cultivating students' capacity for autonomous learning has become a focus of college English teaching. Owing to the distinctive management and training model of military academies, however, their students face particular difficulties in autonomous English learning. Drawing on autonomous learning theory and the realities of military academies, this paper proposes a set of measures and plans for cultivating students' capacity for autonomous English learning.

7.
Because the legal status of ethnic townships is ambiguous, their merger and abolition during urbanization has proceeded in an unregulated fashion, affecting the rights and interests of ethnic minorities: ethnic policies are implemented inconsistently, treatment after mergers varies, and ethnic-township autonomy is difficult to reconcile with villager self-governance. To address these problems, we should provide for ethnic towns in law, include ethnic townships and towns among the areas of regional ethnic autonomy, and enact a Law of the People's Republic of China on Ethnic Townships (Towns), so that the value of ethnic townships can be fully realized.

8.
侯香浪 《科教文汇》2013,(26):10-12
Cultivating learner autonomy is one of the most important goals of foreign language teaching. Many scholars hold that teacher autonomy is the precondition for and guarantee of learner autonomy, so the paths by which college foreign language teachers develop autonomy deserve study. This paper first discusses the concepts of teacher autonomy and learner autonomy and the relationship between them, then analyzes the current state of teachers' autonomous development, finding that its main weaknesses lie in teachers' capacities for autonomous professional development and independent research. Finally, it proposes three ways to improve foreign language teachers' autonomy: strengthening their metacognitive abilities, engaging in teaching reflection, and carrying out action research on teaching.

9.
How firms escape the "integration versus autonomy" dilemma after a technology acquisition is an important but under-explored question. Taking high-tech firms as the research object, this study builds a theoretical model linking pre-acquisition technology screening, resource overlap, and post-acquisition strategy choice, proposes hypotheses, and tests them empirically with multiple linear regression. The results show that within-domain technology screening has a significant positive effect on choosing an integration strategy but no significant effect on choosing an autonomy strategy, whereas cross-domain technology screening has a significant positive effect on choosing an autonomy strategy but not on an integration strategy. Technology overlap positively moderates the relationship between within-domain screening and the integration strategy, and negatively moderates the relationship between cross-domain screening and the autonomy strategy; relational overlap negatively moderates the former relationship and positively moderates the latter. These conclusions reveal, at the level of overall value creation, the mechanism by which acquirers choose a strategic mode after a technology acquisition, and provide a theoretical basis for Chinese firms seeking to resolve the "integration or autonomy" question.
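The moderated multiple regression this abstract describes can be sketched as follows; this is a minimal illustration on synthetic data, and all variable names (screening intensity, technology overlap, strategy preference) are assumptions standing in for the study's actual measures, not its data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical variables: X = within-domain technology screening,
# M = technology overlap (moderator), Y = integration-strategy preference.
X = rng.normal(size=n)
M = rng.normal(size=n)
# Simulate a positive moderation effect (b3 = 0.3) plus noise.
Y = 0.5 * X + 0.2 * M + 0.3 * X * M + rng.normal(scale=0.5, size=n)

# Moderated regression: Y = b0 + b1*X + b2*M + b3*(X*M) + e.
# A significant b3 would indicate that M moderates the X -> Y relationship.
design = np.column_stack([np.ones(n), X, M, X * M])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
b0, b1, b2, b3 = coef
```

In practice one would also report standard errors and p-values for the interaction term; the least-squares fit above only recovers the coefficients.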

10.
This paper offers an ethical framework for the development of robots as home companions that are intended to address the isolation and reduced physical functioning of frail older people with capacity, especially those living alone in a noninstitutional setting. Our ethical framework gives autonomy priority in a list of purposes served by assistive technology in general, and carebots in particular. It first introduces the notion of "presence" and draws a distinction between humanoid multi-function robots and non-humanoid robots to suggest that the former provide a more sophisticated presence than the latter. It then looks at the difference between lower-tech assistive technological support for older people and its benefits, and contrasts these with what robots can offer. This provides some context for the ethical assessment of robotic assistive technology. We then consider what might need to be added to presence to produce care from a companion robot that deals with older people's reduced functioning and isolation. Finally, we outline and explain our ethical framework. We discuss how it combines sometimes conflicting values that the design of a carebot might incorporate, if informed by an analysis of the different roles that can be served by a companion robot.

11.
徐鹏 《科技广场》2011,(1):42-44
Robotics, one of the great achievements of twentieth-century automatic control, has made enormous progress, and mobile robots are increasingly deployed across industries. Mobile robots possess a high degree of self-planning, self-organization and adaptability, which suits them to complex, unstructured environments. Against the background of autonomous mobile robots, this paper studies and discusses the key technology of path planning.
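Grid-based A* search is one of the standard path-planning techniques such a survey would cover. The sketch below is a minimal illustration on a toy occupancy grid, not the paper's own algorithm; the grid, start and goal are invented for the example.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]          # (f = g + h, g, cell)
    came_from = {}
    g = {start: 0}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:                          # reconstruct path
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g.get(cur, float("inf")):
            continue                             # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g[cur] + 1
                if ng < g.get(nxt, float("inf")):
                    g[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 1],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (4, 3))
```

With an admissible heuristic such as the Manhattan distance on a unit-cost grid, A* returns a shortest path, which is why it is a common baseline in mobile-robot path planning.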

12.
Values such as respect for autonomy, safety, enablement, independence, privacy and social connectedness should be reflected in the design of social robots. The same values should affect the process by which robots are introduced into the homes of older people to support independent living. These values may, however, be in tension. We explored what potential users thought about these values, and how the tensions between them could be resolved. With the help of partners in the ACCOMPANY project, 21 focus groups (123 participants) were convened in France, the Netherlands and the UK. These groups consisted of: (i) older people, (ii) informal carers and (iii) formal carers of older people. The participants were asked to discuss scenarios in which there is a conflict between older people and others over how a robot should be used, these conflicts reflecting tensions between values. Participants favoured compromise, persuasion and negotiation as a means of reaching agreement. Roles and related role-norms for the robot were thought relevant to resolving tensions, as were hypothetical agreements between users and robot-providers before the robot is introduced into the home. Participants' understanding of each of the values—autonomy, safety, enablement, independence, privacy and social connectedness—is reported. Participants tended to agree that autonomy often has priority over the other values, with the exception in certain cases of safety. The second part of the paper discusses how the values could be incorporated into the design of social robots and operationalised in line with the views expressed by the participants.

13.
杜严勇 《科学学研究》2017,35(11):1608-1613
Who bears moral responsibility, and how it should be distributed, is an important question in robot ethics. Even as robots gain ever greater autonomy and stronger learning abilities, this does not mean that robots can bear moral responsibility independently; rather, the people and organizations involved in robotic technology should bear it. The moral responsibilities of robots' designers, manufacturers, users, government agencies and other organizations should be made explicit, and concrete mechanisms for assuming responsibility should be established, in order to avoid the phenomenon of "organized irresponsibility". Moreover, judging from current legal scholarship on driverless technology, most scholars hold that manufacturers, sellers and users should bear liability, and that the driverless car itself cannot constitute a subject of responsibility.

14.
This paper critically engages with new self-tracking technologies. In particular, it focuses on a conceptual tension between the idea that disclosing personal information increases one's autonomy and the idea that informational privacy is a condition for autonomous personhood. I argue that while self-tracking may sometimes prove to be an adequate method to shed light on particular aspects of oneself and can be used to strengthen one's autonomy, self-tracking technologies often cancel out these benefits by exposing too much about oneself to an unspecified audience, thus undermining the informational privacy boundaries necessary for living an autonomous life.

15.
郎香香  尤丹丹 《科研管理》2021,42(6):166-175
Since the reform and opening-up, China has carried out several large-scale demobilizations, and managers with military backgrounds have become an important force in Chinese business. Drawing on imprinting theory and upper echelons theory, and taking A-share companies listed on the Shanghai and Shenzhen exchanges from 2008 to 2016 as the sample, this paper empirically tests how managers' early military experience affects corporate R&D investment and through what mechanism. Panel regressions show, first, that firms whose managers have military experience invest more in R&D; second, that the effect is more pronounced in firms whose founder-managers have military experience, in firms without political connections, in non-state-owned firms, and in highly competitive industries; and third, that managers' risk-taking partially mediates the relationship between military experience and R&D investment. Further analysis shows that managers' military experience increases not only R&D investment but also the number of patent applications. The study enriches research on managerial heterogeneity, opens the "black box" of how managers with military experience influence corporate R&D investment, and offers guidance to listed firms, especially innovative ones, in appointing managers.

16.
With their prospect for causing both novel and known forms of damage, harm and injury, the issue of responsibility has been a recurring theme in the debate concerning autonomous vehicles. Yet the discussion of responsibility has obscured the finer details both between the underlying concepts of responsibility and of their application to the interaction between human beings and artificial decision-making entities. By developing meaningful distinctions and examining their ramifications, this article contributes to this debate by refining the underlying concepts that together inform the idea of responsibility. Two different approaches are offered to the question of responsibility and autonomous vehicles: targeting and risk distribution. The article then introduces a thought experiment which situates autonomous vehicles within the context of crash-optimisation impulses and coordinated or networked decision-making. It argues that guiding ethical frameworks overlook compound or aggregated effects which may arise, and which can lead to subtle forms of structural discrimination. Insofar as such effects remain unrecognised by the legal systems relied upon to remedy them, the potential for societal inequalities is increased and entrenched, and situations of injustice and impunity may be unwittingly maintained. This second set of concerns may represent a hitherto overlooked type of responsibility gap arising from inadequate accountability processes capable of challenging systemic risk displacement.

17.
The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken into account, prior to the implementation and development stages. Here I examine whether the creation of virtuous autonomous machines is morally permitted by the central tenets of virtue ethics. It is argued that the creation of such machines violates certain tenets of virtue ethics, and hence that the creation and use of those machines is impermissible. One upshot of this is that, although virtue ethics may have a role to play in certain near-term Machine Ethics projects (e.g. designing systems that are sensitive to ethical considerations), machine ethicists need to look elsewhere for a moral framework to implement into their autonomous artificial moral agents, Wallach and Allen's claims notwithstanding.

18.
Functional testing is the last line of defense for software product quality. Under schedule pressure, many applications are not developed with the usual quality-assurance practices, but final functional testing remains indispensable. Functional testing is also called black-box testing, and designing its test cases is a basic skill for many testers; this paper therefore introduces several common methods for designing black-box test cases.
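Equivalence partitioning and boundary value analysis are two of the most common black-box test-design methods of the kind this abstract refers to. The sketch below applies both to a hypothetical age-validation function; the function, its valid range (18 to 60), and all test values are invented for illustration.

```python
def validate_age(age):
    """System under test (hypothetical): accepts integer ages 18..60 inclusive."""
    return isinstance(age, int) and 18 <= age <= 60

# Equivalence partitioning: pick one representative value per input class,
# on the assumption that every value in a class is handled the same way.
partitions = {
    "below_valid": (10, False),   # invalid class: age < 18
    "valid":       (35, True),    # valid class: 18 <= age <= 60
    "above_valid": (70, False),   # invalid class: age > 60
}

# Boundary value analysis: test values on and immediately adjacent to each
# boundary, where off-by-one defects cluster.
boundaries = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

for name, (value, expected) in partitions.items():
    assert validate_age(value) == expected, name
for value, expected in boundaries.items():
    assert validate_age(value) == expected, value
```

Together the two methods cut the test set from the full input domain down to a handful of cases while still exercising each behaviorally distinct region and its edges.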

19.
杨阳  张新民 《情报科学》2008,26(12):1770-1773
This article analyzes and summarizes the lessons of the open-source revolution's success and, through a comparative study of open-source and traditional organizations, explores what that revolution can teach organizational knowledge management. Traditional knowledge management impedes innovation and knowledge sharing, and the resulting products and services are of relatively low quality. Open-source organizations, by contrast, are characterized by openness and expertise. Future knowledge work may well imitate the open-source way of working, shifting from the traditional closed model of knowledge management to an open one, so that organizations can carry out knowledge management activities more effectively.

20.
蒋健明  胡斌 《科技通报》2012,(1):160-166
Normative multi-agent systems achieve system-level goals by regulating the behavior of individual agents through norms. A large semantic gap separates abstract norms expressed in natural language from concrete norms expressed in an operational language, and converting high-level abstract norms step by step into low-level concrete norms through a layered norm model is an effective way to bridge it. Existing layered models, however, have too many layers with blurred boundaries between them, and both norm refinement and inter-layer norm conversion are done entirely by hand, which reduces system reliability and dynamic adaptability while raising the cost of developing the normative system. This paper proposes and implements an autonomous layered norm framework that divides the normative system into three layers. At the formal-norm layer it establishes semantic entailment relations between concepts and responsibility relations between actions to refine norms automatically, and it builds an action-to-operation mapping model to convert formal norms between layers automatically, thereby improving the autonomy of the normative system, lowering its development cost, and raising its reliability and dynamic adaptability.
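The layered refinement idea can be illustrated with a toy sketch: abstract norms are refined into formal norms via an entailment relation, and formal norms are mapped to concrete operations. This is only a schematic illustration of the three-layer structure described above; every norm name and mapping is invented, and the paper's actual formalization is far richer.

```python
# Layer 1: abstract norms stated in natural language (hypothetical examples).
abstract_norms = ["protect user privacy"]

# Layer 2: semantic entailment refines each abstract norm into formal norms.
entailment = {
    "protect user privacy": ["encrypt personal data", "log data access"],
}

# Layer 3: an action-to-operation mapping turns formal norms into
# executable operations of the underlying system.
action_map = {
    "encrypt personal data": ["aes_encrypt(record)"],
    "log data access": ["append_audit_log(event)"],
}

def refine(norms):
    """Refine abstract norms through the two mappings automatically."""
    formal = [f for n in norms for f in entailment.get(n, [])]
    concrete = [op for f in formal for op in action_map.get(f, [])]
    return formal, concrete

formal, concrete = refine(abstract_norms)
```

The point of automating these two mappings, as the abstract argues, is that no human has to hand-translate each abstract norm down to operations when the norm set changes.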


Copyright©北京勤云科技发展有限公司  京ICP备09084417号