Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper offers an ethical framework for the development of robots as home companions that are intended to address the isolation and reduced physical functioning of frail older people with capacity, especially those living alone in a noninstitutional setting. Our ethical framework gives autonomy priority in a list of purposes served by assistive technology in general, and carebots in particular. It first introduces the notion of “presence” and draws a distinction between humanoid multi-function robots and non-humanoid robots to suggest that the former provide a more sophisticated presence than the latter. It then looks at lower-tech assistive technological support for older people and its benefits, and contrasts these with what robots can offer. This provides some context for the ethical assessment of robotic assistive technology. We then consider what might need to be added to presence to produce care from a companion robot that deals with older people’s reduced functioning and isolation. Finally, we outline and explain our ethical framework. We discuss how it combines sometimes conflicting values that the design of a carebot might incorporate, if informed by an analysis of the different roles that can be served by a companion robot.

2.
Current uses of robots in classrooms are reviewed and used to characterise four scenarios: (s1) Robot as Classroom Teacher; (s2) Robot as Companion and Peer; (s3) Robot as Care-eliciting Companion; and (s4) Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children’s privacy, especially when they masquerade as their friends and companions, when sensors are used to measure children’s responses, and when records are kept. Social robots designed to appear as if they understand and care for humans necessarily involve some deception (itself a complex notion), and could increase the risk of reduced human contact. Children could form attachments to robot companions (s2 and s3) or robot teachers (s1), and this could have a deleterious effect on their social development. There are also concerns about the ability, and use, of robots to control or make decisions about children’s behaviour in the classroom. It is concluded that there are good reasons not to welcome fully fledged robot teachers (s1), and that robot companions (s2 and s3) should be given a cautious welcome at best. The limited circumstances in which robots could be used in the classroom to improve the human condition by offering otherwise unavailable educational experiences are discussed.

3.
It should not be a surprise in the near future to encounter either a personal or a professional service robot in our homes and/or our workplaces: according to the International Federation for Robots, there will be approximately 35 million service robots at work by 2018. Given that individuals will interact and even cooperate with these service robots, their design and development demand ethical attention. With this in mind, I suggest the use of an approach for incorporating ethics into the design process of robots known as Care Centered Value Sensitive Design (CCVSD). Although this approach was originally and intentionally designed for the healthcare domain, the aim of this paper is to present a preliminary study of how personal and professional service robots might also be evaluated using the CCVSD approach. The normative foundations for CCVSD come from its reliance on the care ethics tradition and in particular the use of care practices for: (1) structuring the analysis and (2) determining the values of ethical import. To apply CCVSD outside of healthcare one must show that the robot has been integrated into a care practice. Accordingly, the practice in which the robot is to be used must be assessed and shown to meet the conditions of a care practice. By investigating the foundations of the approach I hope to show why it may be applicable to service robots, and further to give examples of current robot prototypes that can and cannot be evaluated using CCVSD.

4.
This article discusses mechanisms and principles for the assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce the new concept of autonomous power and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be prepared for a future in which people blame robots for their actions. It is important, already today, to investigate the mechanisms that control human behavior in this respect. The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of robots’ behavior should be seen as one of many performance measures by which we evaluate robots. How to design ethics-based control systems should be investigated carefully now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death without including some kind of moral framework.

5.
白玫  朱庆华 《现代情报》2018,38(12):3-8
Based on a survey of more than 13,000 elderly users in Jianghan District, this study examines local older people’s age, living arrangements, economic status, and physical condition, analyzes their demand for smart elderly-care services and their willingness to volunteer, explores the factors influencing both, and on this basis profiles the target users in order to support decision making. The data analysis shows that: (1) different categories of older people differ in their demand for smart elderly-care services and in the influencing factors; (2) for both ordinary older people and the five special categories of older people, the factors influencing demand and willingness to volunteer vary across age groups; (3) the target users for promoting and implementing smart elderly-care services are ordinary older people who are of advanced age, live alone, and are in poor health, and older people in the five special categories who are of advanced age, live alone, have low incomes, and are in poor health; (4) the users who could be developed into volunteers are ordinary older people who live alone or are empty-nesters, and older people in the five special categories who are younger, live alone or are empty-nesters, have low incomes, and are in relatively good health.

6.
As we near a time when robots may serve a vital function by becoming caregivers, it is important to examine the ethical implications of this development. We apply the capabilities approach as a guide to both the design and use of robot caregivers, in the hope of maximizing opportunities to preserve or expand freedom for care recipients. We think the use of the capabilities approach will be especially valuable for improving the ability of impaired persons to interface more effectively with their physical and social environments.

7.
This paper explores the relationship between dignity and robot care for older people. It highlights the disquiet that is often expressed about failures to maintain the dignity of vulnerable older people, but points out some of the contradictory uses of the word ‘dignity’. Certain authors have resolved these contradictions by identifying different senses of dignity, contrasting the inviolable dignity inherent in human life with other forms of dignity that can be present to varying degrees. The capability approach (CA) is introduced as a different but tangible account of what it means to live a life worthy of human dignity. It is used here as a framework for the assessment of the possible effects of eldercare robots on human dignity. The CA enables the identification of circumstances in which robots could enhance dignity by expanding the set of capabilities that are accessible to frail older people. At the same time, it is also possible within its framework to identify ways in which robots could have a negative impact, by impeding the access of older people to essential capabilities. It is concluded that the CA has some advantages over other accounts of dignity, but that further work and empirical study are needed in order to adapt it to the particular circumstances and concerns of those in the latter part of their lives.

8.
Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like (normally considered machine morality) and discuss a number of ethical questions about the design, use, and treatment of such moral robots in society (normally considered robot ethics). Instead of searching for a fixed set of criteria of a robot’s moral competence, I identify the multiple elements that make up human moral competence and probe the possibility of designing robots that have one or more of these human elements, which include: moral vocabulary; a system of norms; moral cognition and affect; moral decision making and action; and moral communication. Juxtaposing empirical research, philosophical debates, and computational challenges, this article adopts an optimistic perspective: if robotic design truly commits to building morally competent robots, then those robots could be trustworthy and productive partners, caretakers, educators, and members of the human community. Moral competence does not resolve all ethical concerns over robots in society, but it may be a prerequisite to resolve at least some of them.

9.
Research progress and trends in medical and rehabilitation robotics (医疗康复机器人研究进展及趋势)   Cited by: 1 (self-citations: 0, citations by others: 1)
As demands on medical and healthcare methods and processes grow in terms of precision, minimal invasiveness, efficiency, and low cost, medical and rehabilitation robotics has attracted great attention worldwide and is developing rapidly. At present, medical and rehabilitation robots are mainly used in surgery, functional rehabilitation, and assistive nursing, but with further technological breakthroughs, robotics may in the future be applied across all areas of healthcare. The field increasingly favors natural and precise interaction between humans and machines. In recent years, breakthroughs in combining human and machine intelligence and in human–machine interaction have brought humans and machines ever closer together; by using human–machine interaction techniques and methods to combine human and machine intelligence so that the two complement each other and work collaboratively, major theoretical innovations and methodological breakthroughs can be expected in medical rehabilitation. Social demand, technological innovation, and the fusion of human and machine intelligence have greatly promoted the development of medical and rehabilitation robots. Because medical and rehabilitation robotics concerns the special domain of human life and health and has a large potential market, it has been designated a strategic emerging industry by many countries. China likewise needs to further strengthen research and development of medical and rehabilitation robots to promote this strategic emerging industry and to meet its citizens’ needs for health services (medical care, rehabilitation, and care for an aging population).

10.
The development of autonomous, robotic weaponry is progressing rapidly. Many observers agree that banning the initiation of lethal activity by autonomous weapons is a worthy goal. Some disagree with this goal, on the grounds that robots may equal and exceed the ethical conduct of human soldiers on the battlefield. Those who seek arms-control agreements limiting the use of military robots face practical difficulties. One such difficulty concerns defining the notion of an autonomous action by a robot. Another challenge concerns how to verify and monitor the capabilities of rapidly changing technologies. In this article we describe concepts from our previous work about autonomy and ethics for robots and apply them to military robots and robot arms control. We conclude with a proposal for a first step toward limiting the deployment of autonomous weapons capable of initiating lethal force.

11.
In the last decade we have entered the era of remote-controlled military technology. The excitement about this new technology should not mask the ethical questions that it raises. A fundamental ethical question is who may be held responsible for civilian deaths. In this paper we discuss the role of the human operator, or so-called ‘cubicle warrior’, who remotely controls military robots from behind visual interfaces. We argue that the socio-technical system conditions the cubicle warrior to dehumanize the enemy. As a result, the cubicle warrior is morally disengaged from his destructive and lethal actions. This challenges what he should know in order to make responsible decisions (the so-called knowledge condition). Now and in the near future, three factors may increase this moral disengagement even further by weakening the operator’s internal locus of control: (1) the photoshopping of war; (2) the moralization of technology; and (3) the speed of decision-making. As a result, cubicle warriors can no longer reasonably be held responsible for the decisions they make.

12.
Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. Moral concerns for agents such as intelligent search machines are relatively simple, while highly intelligent and autonomous artifacts with significant impact and complex modes of agency must be equipped with more advanced ethical capabilities. Systems such as cognitive robots are being developed that are expected to become part of our everyday lives in the coming decades, so it is necessary to ensure that their behaviour is adequate. In an analogy with artificial intelligence, which is the ability of a machine to perform activities that would require intelligence in humans, artificial morality is considered to be the ability of a machine to perform activities that would require morality in humans. The capacity for artificial (artifactual) morality, such as artifactual agency, artifactual responsibility, artificial intentions, artificial (synthetic) emotions, etc., comes in varying degrees and depends on the type of agent. As an illustration, we address the assurance of safety in modern High Reliability Organizations through responsibility distribution. In the same way that the concept of agency is generalized in the case of artificial agents, the concept of moral agency, including responsibility, is generalized too. We propose to look at artificial moral agents as having functional responsibilities within a network of distributed responsibilities in a socio-technological system. This does not take away the responsibilities of the other stakeholders in the system, but facilitates an understanding and regulation of such networks. It should be pointed out that the process of development must assume an evolutionary form with a number of iterations, because the emergent properties of artifacts must be tested in real-world situations with agents of increasing intelligence and moral competence. We see this paper as a contribution to macro-level requirements engineering through the discussion and analysis of general requirements for the design of ethical robots.

13.
The internet of things is increasingly spreading into the domain of medical and social care. Internet-enabled devices for monitoring and managing the health and well-being of users outside of traditional medical institutions have rapidly become common tools to support healthcare. Health-related internet of things (H-IoT) technologies increasingly play a key role in health management, for purposes including disease prevention, real-time tele-monitoring of patients’ functions, testing of treatments, fitness and well-being monitoring, medication dispensation, and health research data collection. H-IoT promises many benefits for health and healthcare. However, it also raises a host of ethical problems stemming from the inherent risks of internet-enabled devices, the sensitivity of health-related data, and their impact on the delivery of healthcare. This paper maps the main ethical problems that have been identified in the relevant literature and identifies key themes in the ongoing debate on ethical problems concerning H-IoT.

14.
On the ethical governance mechanism for life science and technology in China (关于我国生命科学技术伦理治理机制的探讨)   Cited by: 1 (self-citations: 0, citations by others: 1)
While the development of life science and technology brings progress and benefits to humanity, it also raises serious ethical, social, and legal problems. Because bioethical issues involve divergent viewpoints and touch on many different parts of society, governance approaches are increasingly being adopted internationally to address them. This paper sets out the main elements that a bioethics governance mechanism should include, analyzes the achievements and remaining problems of bioethics research and administration in China, and proposes several ideas for building an ethical governance mechanism for life science and technology in China.

15.
We can learn about human ethics from machines. We discuss the design of a working machine for making ethical decisions, the N-Reasons platform, applied to the ethics of robots. The N-Reasons platform builds on web-based surveys and experiments to enable participants to make better ethical decisions. Their decisions are better than those obtained from our existing surveys in three ways. First, they are social decisions supported by reasons. Second, the results rest on weaker premises, as no exogenous expertise (aside from that provided by the participants) is needed to seed the survey. Third, N-Reasons is designed to support experiments, so we can learn how to improve the platform. We sketch experimental results that show the platform is a success and point to ways it can be improved.

16.
Radio Frequency Identification (RFID) systems identify and track objects, animals and, in principle, people. The ability to gather information obtained by tracking consumer goods, government documents, monetary transactions and human beings raises a number of interesting and important privacy issues. Moreover, RFID systems pose an ensemble of other ethical challenges related to appropriate uses and users of such systems. This paper reviews a number of RFID applications with the intention of identifying the technology’s benefits and possible misuses. We offer an overview and discussion of the most important ethical issues concerning RFID, and describe and examine some methods of protecting privacy. Norman G. Einspruch serves as a consultant to several high-technology companies, one of which is in the RFID components and systems business.

17.
叶文琴 《软科学》2004,18(4):75-77
Traditionally, the soundness of a corporate decision has been evaluated mainly from political, legal, economic, and technical perspectives. As modern society pays growing attention to corporate social responsibility, however, it has become increasingly necessary for firms to add ethical considerations when evaluating whether a decision is sound. Drawing on existing research in Western academia, this paper distills three components of the corporate ethical decision-making process, namely “ethical perception”, “ethical judgment”, and “ethical intention”, explains their relationships theoretically, and tests them empirically, showing that it is feasible and correct to treat these three components as constituting the corporate ethical decision-making process.

18.
As private elderly-care institutions play an increasingly important role in addressing the elderly-care needs of urban older populations, the quality of their services has also become a focus of public concern. This paper points out that the widespread absence of gerontological social work functions in private elderly-care institutions prevents them from providing effective services in residents’ admission assessment, quality-of-life intervention, and the relief of care workers’ job stress and negative emotions; as a result, these institutions cannot deliver effective individualized care, residents’ satisfaction with their quality of life remains low, and care workers leave frequently. The paper attributes this common problem to three causes: first, a shortage of care workers leads investors and managers to treat physical-care outcomes as the only benefit pursued for the time being; second, investors and managers, starting from cost control, cut the expense of hiring professional social workers; and third, managers are effectively “illiterate” in social work knowledge and concepts, leaving gerontological social work functions absent. The author argues that this absence will become a key factor constraining improvements in the service quality of private elderly-care institutions.

19.
Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. When the black box is opened up and we see how autonomy is understood and ‘made’ by those involved in the design and development of robots, the responsibility questions change significantly.

20.
In recent years, China’s social transformation has entered a critical stage; people’s values have become more diverse, and amid the pursuit and assertion of individual interests, incidents of uncontrolled corporate responsibility, unbalanced interests, and economic bad faith have emerged one after another. How to improve the ethical behavior of organizations and employees is therefore one of the important problems that corporate managers urgently need to solve. Through an in-depth study of ethical climate, proactive ethical behavior, and the relationship between them, this paper aims to help organizations foster a positive ethical climate.
